
TensorFlow oneDNN build manual for FUJITSU Software Compiler Package (TensorFlow v2.7.0)


Build Instruction for TensorFlow on Fujitsu Supercomputer PRIMEHPC FX1000/FX700

Table of contents

  1. Introduction
  2. Environment and Prerequisites
  3. Installation Instructions
  4. Troubleshooting
  5. List of Software Versions

1. Introduction

This document contains instructions for installing TensorFlow on a Fujitsu Supercomputer PRIMEHPC FX1000 or FX700.
It also provides sample instructions for installing and running several important models optimized for the FX1000 and FX700.

When building TensorFlow, bazel, the build tool it uses, downloads third-party software from the Internet.
However, since there are many requests to build on systems in isolated facilities such as corporate laboratories, we experimentally provide installation instructions for such environments (called "offline installation" in this manual). Please note that this procedure is still at the beta-test level and is not guaranteed to work.

For offline installation, you first download a set of necessary files beforehand on a system connected to the Internet (the "download system"), and then transfer them to the system to be installed (the "target system").

1.1. Terminology

The following terms and abbreviations are used in this manual.

| Terms/Abbr. | Meaning |
| --- | --- |
| Online Installation | Install TensorFlow on a system with direct access to the Internet (or via a proxy) |
| Offline Installation | Install TensorFlow on a system that does not have direct access to the Internet |
| Target system | System on which TensorFlow is to be installed |
| Download system | System used to download the necessary files in advance for offline installation |
| TCS | FX1000's job execution scheduler and compiler/library environment (Technical Computing Suite) |
| CP | FX700's compiler/library environment (Compiler Package) |

2. Environment and prerequisites

2.1. Download System

  • OS: UNIX or Linux
  • The following software is available: bash, python, wget, git, unzip, tar, and curl
  • The target system is accessible from this system (for transferring the downloaded files)
  • Sufficient free space in the file system. The total amount of downloaded data is about 41 GB.

    | Modules | Data Size |
    | --- | --- |
    | TensorFlow source | 2 GB |
    | Extra files needed for TensorFlow build | 2 GB |
    | Sample Model ResNet | 5 MB |
    | Sample Model OpenNMT | 5 GB |
    | Sample Model BERT | 1 GB |
    | Sample Model Mask R-CNN | 30 GB |
    | Total | 41 GB |

The download directory is under the TensorFlow source directory.

2.2. Target system for installation

  • PRIMEHPC FX1000 or FX700
  • For FX700
    • RHEL 8.x or CentOS 8.x must be installed
    • If you want to use FCC, Compiler Package V10L20 must be installed
  • The following packages and commands must already be installed (see the example below):
    make gcc cmake libffi-devel gcc-gfortran numactl git patch unzip tk tcsh tcl lsof python3 pciutils
    (For the Mask R-CNN sample model) libxml2 libxslt libxslt-devel libxml2-devel
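For example, on RHEL 8.x or CentOS 8.x these can be installed with yum (run as root or via sudo; this is a sketch assuming the standard repositories provide all of the packages):

$ sudo yum install make gcc cmake libffi-devel gcc-gfortran numactl git patch unzip \
      tk tcsh tcl lsof python3 pciutils
$ sudo yum install libxml2 libxslt libxslt-devel libxml2-devel   # only for the Mask R-CNN sample model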

Please note that building and executing on NFS may cause unexpected problems depending on the performance and configuration of the NFS server.
It is recommended to use locally-attached storage or network storage that is fast enough.

2.3. Directory structure after installation

The directory structure after installation looks like this. The directories PREFIX, VENV_PATH, and TCSDS_PATH are specified in the configuration file env.src. These three directories and TENSORFLOW_TOP must be independent of each other. (Make sure that none of them is located under another.)

  PREFIX (where local binaries are stored)
    +- bin (Python, etc.)
    +- lib

  VENV_PATH (location of python modules needed to run TensorFlow)
    +- bin (activate)
    +- lib (packages to be installed by pip)

  TCSDS_PATH (Fujitsu compiler, *: already installed before the procedure)
    +- bin (fcc, FCC, etc.)
    +- lib64

  TENSORFLOW_TOP (complete TensorFlow source, transferred from the download system or downloaded from https://www.github.com/fujitsu/tensorflow)
    +- tensorflow
    +- third_party
    +- fcc_build_script (TensorFlow Build Scripts)
         +- down (downloaded files will be stored)
          +- sample_script (sources for the ResNet, OpenNMT, BERT, and Mask R-CNN models, and their training data, will be extracted under here)

2.4. About proxy settings

If your environment requires a proxy for external access, please set the following environment variables.
(Replace "user", "pass", "proxy_url", and "port" with the values appropriate for your environment.)

$ export http_proxy=http://user:pass@proxy_url:port
$ export https_proxy=https://user:pass@proxy_url:port

Note: curl, wget, git, and pip3 recognize the above environment variables, so there is no need to edit rc files or .gitconfig.

3. Installation procedure

The general installation flow is as follows:

  1. Preparation (Common for online/offline installation)
  2. Download (Offline installation only)
  3. Build (Common for online/offline installation)

3.1. Preliminaries (Detail)

3.1-A. Download the source set

$ git clone https://github.com/fujitsu/tensorflow.git
$ cd tensorflow                  # From now on, we'll call this directory TENSORFLOW_TOP
$ git checkout -b fujitsu_v2.7.0_for_a64fx origin/fujitsu_v2.7.0_for_a64fx
$ cd fcc_build_script

In the following examples, /home/user/tensorflow is used as TENSORFLOW_TOP.

3.1-B. Edit env.src

'env.src' is a configuration file located in $TENSORFLOW_TOP/fcc_build_script.

The configuration is divided into two parts.

  • Control of the Build

    | Flag Name | Default Value | Meaning | Remarks |
    | --- | --- | --- | --- |
    | fjenv_use_venv | true | Use VENV when true | 'false' is not tested |
    | fjenv_use_fcc | true | Use FCC when true; otherwise, use GCC | 'false' is not tested |
    | fjenv_offline_install | false | 'true' for offline installation | |

    Note that these flags are defined as shell variables in 'env.src', but they can also be set as environment variables outside of 'env.src'. In that case, the environment variable setting takes precedence over the setting in 'env.src'. (See the example after this list.)

  • Set up the build directories.
    For the directory configuration, refer to the diagram in Chapter 2.3.

    | Variable Name | Meaning | Supplemental Information |
    | --- | --- | --- |
    | PREFIX | Directory where the executables generated by this build procedure are installed | |
    | VENV_PATH | Directory where the VENV is created | Valid when fjenv_use_venv=true |
    | TCSDS_PATH | Base directory of TCS or CP (the directory containing bin, lib, etc.) | Valid when fjenv_use_fcc=true |

It is not necessary to alter any settings other than those mentioned above.
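The following is a minimal sketch of the settings in env.src. The paths below are placeholders used only for illustration; keep the variable syntax that is already in the file and replace the values with directories appropriate for your system.

# Build control flags (can also be exported as environment variables before
# running the scripts; an exported value takes precedence over env.src)
fjenv_use_venv=true
fjenv_use_fcc=true
fjenv_offline_install=false           # set to true for offline installation

# Directory configuration (see the diagram in Chapter 2.3; paths are placeholders)
PREFIX=/home/user/local               # locally built binaries (Python, etc.)
VENV_PATH=/home/user/venv             # Python virtual environment for TensorFlow
TCSDS_PATH=/opt/fujitsu/compiler      # base directory of TCS/CP (contains bin, lib64)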

3.2. Download (Detail)

This section is only for offline installation. If you are installing on an Internet-connected system, skip this section and go to 3.3.

3.2-A. Download the Files for TensorFlow

Run the shell scripts whose names start with 0 in the fcc_build_script directory, one by one in numerical order, with the argument download.
The destination is $TENSORFLOW_TOP/fcc_build_script/down.

$ pwd
/home/user/tensorflow/fcc_build_script          # $TENSORFLOW_TOP/fcc_build_script

$ bash 01_python_build.sh        download       # Download Python
$ bash 02_bazel_build.sh         download       # Download bazel
$ bash 03_make_venv.sh           download       # Download Python modules for TensorFlow
$ bash 04_numpy_scipy.sh         download       # Download NumPy and SciPy
$ bash 05-1_build_batchedblas.sh download       # Download BatchedBlas
$ bash 05_tf_build.sh            download       # Download Modules for TensorFlow build
$ bash 07_horovod_install.sh     download       # Download Horovod

The scripts are designed so that they will not download files that have already been downloaded. If you want to download the files again, run each script with the clean argument first, and then run it with download. Please note that clean has higher priority than download, so if you specify clean download or download clean, only clean is performed.
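For example, to re-download the files fetched by 05_tf_build.sh:

$ bash 05_tf_build.sh clean          # delete the files this script has downloaded
$ bash 05_tf_build.sh download       # download them again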

3.2-B. (Optional) Download the Files for Sample Models

The sample models are located under $TENSORFLOW_TOP/fcc_build_script/sample_script, each in a dedicated directory whose name starts with a number. Run the shell scripts whose names start with a number in each directory, one by one in numerical order, with the argument download.

The sample models and their training data are downloaded into each model directory; everything else is downloaded into $TENSORFLOW_TOP/fcc_build_script/down.

The scripts are designed so that they will not download files that have already been downloaded. If you want to download the files again, run each script with the clean argument first, and then run it with download.
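For example, to download the files for the ResNet sample model (starting from $TENSORFLOW_TOP/fcc_build_script):

$ cd sample_script/01_resnet
$ bash 10_setup_resnet.sh download
$ cd ../..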

Note that the training data is not deleted even with clean, because it usually takes a lot of time to download or re-create. If you really want to delete the training data, remove the following data directories manually.

  • $TENSORFLOW_TOP/fcc_build_script/sample_script/02_OpenNMT/dataset
  • $TENSORFLOW_TOP/fcc_build_script/sample_script/03_Bert/cased_L-12_H-768_A-12*
  • $TENSORFLOW_TOP/fcc_build_script/sample_script/04_Mask-R-CNN/dataset

3.2-C. Transfer to the Target System

Transfer everything under $TENSORFLOW_TOP to the target system.

We do not describe the transfer method, as it depends on your system configuration.
Use scp, ftp, a shared filesystem, or any other method appropriate for your system.
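For example, a simple transfer with tar and scp might look like the following (the host name and paths are placeholders; adjust them to your environment):

# On the download system
$ cd /home/user && tar czf tensorflow.tar.gz tensorflow     # archive $TENSORFLOW_TOP
$ scp tensorflow.tar.gz user@target-host:/home/user/

# On the target system
$ cd /home/user && tar xzf tensorflow.tar.gz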

3.3. Build (Detail)

3.3-A. Build TensorFlow

Run the shell scripts whose names start with a number, in numerical order, one after the other.
The following example shows how to install from an interactive shell. The approximate time for each step is shown as a comment on each command (measured on an FX700, 2.0 GHz, 48 cores).

If you are using a job control system, you can, for example, create a batch script that executes the series of build scripts and then submit it. In that case, it is recommended to add a shell command that terminates the batch script when a build script fails (such as set -e in bash), as in the sketch below.
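A minimal batch script might look like the following (job scheduler directives such as resource and time limits are omitted because they depend on your system; add the appropriate header for your site):

#!/bin/bash
set -e                                          # stop at the first failing build script
cd /home/user/tensorflow/fcc_build_script       # $TENSORFLOW_TOP/fcc_build_script

bash 01_python_build.sh
bash 02_bazel_build.sh
bash 03_make_venv.sh
bash 04_numpy_scipy.sh
bash 05-1_build_batchedblas.sh
bash 05_tf_build.sh
bash 07_horovod_install.sh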

[option] is an optional argument to pass to the script. If omitted, the build is executed.

The scripts are designed so that they will not build again when the binary already exists. If you want to build again, run each script with the rebuild argument.

Please do not confuse rebuild with clean. If clean is specified, all the downloaded files are deleted, which requires you to download them again (and, for offline installation, transfer them again).
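For example, to force only TensorFlow to be rebuilt after a previous build:

$ bash 05_tf_build.sh rebuild        # rebuild even though the binary already exists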

$ pwd
/home/user/tensorflow/fcc_build_script          # $TENSORFLOW_TOP/fcc_build_script

$ bash 01_python_build.sh          [option]     # Build and install Python (5 min.)
$ bash 02_bazel_build.sh           [option]     # Install bazel (< 1 min.)
$ bash 03_make_venv.sh             [option]     # Create VENV (< 1 min.)
$ bash 04_numpy_scipy.sh           [option]     # Build NumPy and SciPy (90 min.)
$ bash 05-1_build_batchedblas.sh   [option]     # Build BatchedBlas (<1 min.)
$ bash 05_tf_build.sh              [option]     # Build TensorFlow (120 min.)
$ bash 07_horovod_install.sh       [option]     # Install Horovod (10 min.)

To verify the build, run the sample model in sample_script/01_resnet.

3.3-B. (Optional) Build Sample Models

The sample models are located in subdirectories whose names start with a number under the sample_script directory. Run the shell scripts whose names start with a number, in numerical order, one after the other.

The details of the build and verification are described below.
For the speed verification, note that the execution speed of deep learning models can vary by 10 to 20%; use the execution speeds described in this manual as a guide, and if your result is within that range, your build is OK.

CAUTION: The sample models provided here are slightly modified from the originals for operation checks and performance analysis purposes; for example, the random number seed may be fixed for profile collection, or the model may be set to stop after a certain number of steps. Please do not use the models as-is for actual training.

Also, please keep in mind that the settings of the sample models are not optimal.

01_resnet

Use the official model (for TensorFlow v1.x) from Google. https://github.com/tensorflow/models/tree/v2.0/official/r1/resnet
Tag: v2.0 (2019/10/15)

$ pwd
/home/user/tensorflow/fcc_build_script/sample_script/01_resnet

$ bash 10_setup_resnet.sh  [option]  # Setup the model (< 1 min.)
$ bash run1proc.sh                   # Run (1 node, 1 proc., 12 cores, use dummy data)
$ bash run1node.sh                   # Run (1 node, 4 proc., 12 cores/proc., use dummy data)

Scripts for two or more nodes are not provided. Please create your own based on run1node.sh.

The following is an example of the output. (See the lines marked with the arrow sign.)

    $ bash run1proc.sh
	(snip)
    INFO:tensorflow:cross_entropy = 7.4513965, learning_rate = 0.0, train_accuracy = 0.0
    I0320 15:34:57.914131 281473099739712 basic_session_run_hooks.py:270] cross_entropy = 7.4513965, learning_rate = 0.0, train_accuracy = 0.0
    INFO:tensorflow:loss = 8.846884, step = 0
    I0320 15:34:57.917531 281473099739712 basic_session_run_hooks.py:270] loss = 8.846884, step = 0
    INFO:tensorflow:global_step/sec: 0.152233
    I0320 15:35:04.481962 281473099739712 basic_session_run_hooks.py:718] global_step/sec: 0.152233
    INFO:tensorflow:loss = 8.846884, step = 1 (6.565 sec)
    I0320 15:35:04.482757 281473099739712 basic_session_run_hooks.py:268] loss = 8.846884, step = 1 (6.565 sec)
    INFO:tensorflow:global_step/sec: 0.503296
    I0320 15:35:06.468833 281473099739712 basic_session_run_hooks.py:718] global_step/sec: 0.503296
--> INFO:tensorflow:loss = 8.840118, step = 2 (1.987 sec)
    I0320 15:35:06.469572 281473099739712 basic_session_run_hooks.py:268] loss = 8.840118, step = 2 (1.987 sec)
    INFO:tensorflow:global_step/sec: 0.503718
    I0320 15:35:08.454073 281473099739712 basic_session_run_hooks.py:718] global_step/sec: 0.503718
--> INFO:tensorflow:loss = 8.820971, step = 3 (1.985 sec)
	(snip)
    INFO:tensorflow:Starting evaluation at 2022-03-20T15:35:34
    I0320 15:35:34.029292 281473099739712 evaluation.py:254] Starting evaluation at 2022-03-20T15:35:34
    INFO:tensorflow:Graph was finalized.
    I0320 15:35:35.488217 281473099739712 monitored_session.py:247] Graph was finalized.
    INFO:tensorflow:Restoring parameters from /home/user/tensorflow/fcc_build_script/sample_script/01_resnet/run_20220320_153359/model.ckpt-10
    I0320 15:35:35.489164 281473099739712 saver.py:1399] Restoring parameters from /home/user/tensorflow/fcc_build_script/sample_script/01_resnet/run_20220320_153359/model.ckpt-10
    INFO:tensorflow:Running local_init_op.
    I0320 15:35:37.194845 281473099739712 session_manager.py:531] Running local_init_op.
    INFO:tensorflow:Done running local_init_op.
    I0320 15:35:37.274327 281473099739712 session_manager.py:534] Done running local_init_op.
    INFO:tensorflow:step = 1 time = 1.845 [sec]
    I0320 15:35:39.972107 281473099739712 resnet_run_loop.py:760] step = 1 time = 1.845 [sec]
    INFO:tensorflow:Evaluation [1/10]
    I0320 15:35:39.972590 281473099739712 evaluation.py:167] Evaluation [1/10]
--> INFO:tensorflow:step = 2 time = 0.575 [sec]
    I0320 15:35:40.547780 281473099739712 resnet_run_loop.py:760] step = 2 time = 0.575 [sec]
    INFO:tensorflow:Evaluation [2/10]
    I0320 15:35:40.548261 281473099739712 evaluation.py:167] Evaluation [2/10]
--> INFO:tensorflow:step = 3 time = 0.586 [sec]
	(snip)

The execution time for each step is displayed. First, 10 steps of training are performed, followed by 10 steps of inference. The first step takes longer because it also performs initialization, so check the times from the second step onward.

For FX700 (2.0 GHz), the expected training result for run1proc.sh and run1node.sh is about 2 sec per step, and the expected inference result is about 0.6 sec per step.

Note that run1node.sh launches four TensorFlow processes, each running the same workload as run1proc.sh, so the overall processing volume is four times larger; this causes each step to take slightly longer.

02_OpenNMT

The model learns to translate from paired English and German sentences.

https://github.com/OpenNMT/OpenNMT-tf/tree/v2.11.0
Tag: v2.11.0 (2020/6/17)

$ pwd
/home/user/tensorflow/fcc_build_script/sample_script/02_OpenNMT

$ bash 20_setup_OpenNMT.sh   [options]   # Setup (5 min.)
$ bash run1proc.sh                       # Run the model (1 node, 1 proc., 24 cores, en-de)
$ bash run1node.sh                       # Run the model (1 node, 2 proc., 24 cores/proc, en-de)

Scripts for two or more nodes are not provided. Please create your own based on run1node.sh.

The following is an example of the output. (See the lines marked with the arrow sign.)

    INFO:tensorflow:Number of model weights: 260 (trainable = 260, non trainable = 0)
    INFO:tensorflow:Step = 1 ; steps/s = 0.01, source words/s = 35, target words/s = 35 ; Learning rate = 0.000000 ; Loss = 10.499719
    INFO:tensorflow:Saved checkpoint run_20220320_153804/testrun/ckpt-1
    INFO:tensorflow:Step = 2 ; steps/s = 0.09, source words/s = 516, target words/s = 504 ; Learning rate = 0.000000 ; Loss = 10.497824
    INFO:tensorflow:Step = 3 ; steps/s = 0.22, source words/s = 1259, target words/s = 1226 ; Learning rate = 0.000000 ; Loss = 10.496581
    INFO:tensorflow:Step = 4 ; steps/s = 0.23, source words/s = 1336, target words/s = 1331 ; Learning rate = 0.000001 ; Loss = 10.491531
        (snip)
--> INFO:tensorflow:Step = 10 ; steps/s = 0.24, source words/s = 1365, target words/s = 1377 ; Learning rate = 0.000001 ; Loss = 10.479089
--> INFO:tensorflow:Step = 11 ; steps/s = 0.24, source words/s = 1350, target words/s = 1317 ; Learning rate = 0.000001 ; Loss = 10.476576
--> INFO:tensorflow:Step = 12 ; steps/s = 0.23, source words/s = 1307, target words/s = 1320 ; Learning rate = 0.000002 ; Loss = 10.471901
--> INFO:tensorflow:Step = 13 ; steps/s = 0.24, source words/s = 1325, target words/s = 1325 ; Learning rate = 0.000002 ; Loss = 10.468829
--> INFO:tensorflow:Step = 14 ; steps/s = 0.24, source words/s = 1348, target words/s = 1360 ; Learning rate = 0.000002 ; Loss = 10.459951
--> INFO:tensorflow:Step = 15 ; steps/s = 0.24, source words/s = 1364, target words/s = 1333 ; Learning rate = 0.000002 ; Loss = 10.462379
--> INFO:tensorflow:Step = 16 ; steps/s = 0.24, source words/s = 1364, target words/s = 1371 ; Learning rate = 0.000002 ; Loss = 10.451959
--> INFO:tensorflow:Step = 17 ; steps/s = 0.24, source words/s = 1340, target words/s = 1333 ; Learning rate = 0.000002 ; Loss = 10.449181

Check the target words/s output at each step. Since performance is unstable for the first few steps, please look at the 10th step and beyond.
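If you save the run output to a file (run.log below is just an example name, not something the scripts produce), the target words/s values can be extracted with a one-liner such as:

$ bash run1proc.sh 2>&1 | tee run.log
$ grep -oE 'target words/s = [0-9]+' run.log | tail -n +10   # skip the unstable first steps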

On FX700 (2.0GHz), the expected result of run1proc.sh is about 1350 target words/sec, and the expected result of run1node.sh is about 2300 target words/sec.

03_Bert

Use the official model from Google.

https://github.com/tensorflow/models/tree/v2.7.0/official/nlp
Tag: v2.7.0 (2021/11/16)

Note: Previously, we provided two tasks, pre-training and fine-tuning, but since the arithmetic processing is almost the same for both, we decided to provide only the pre-training task, which is the more computationally challenging of the two.

$ pwd
/home/user/tensorflow/fcc_build_script/sample_script/03_Bert

$ bash 300_setup_bert.sh                [options]    # Setup (5 min.)
$ bash 311_create_pretraining_data.sh                # Prepare pre-training data (1 min.)
$ bash run1proc.sh                                   # Run pre-training task (1 node, 1 proc., 24 cores)
$ bash run1node.sh                                   # Run pre-training task (1 node, 2 proc., 24 cores/proc)

Scripts for two or more nodes are not provided. Please create your own based on run1node.sh.

The following is an example of the output. (See the lines marked with the arrow sign.)

    I0329 11:25:12.250460 281473158656576 model_training_utils.py:288] Loading from checkpoint file completed
    I0329 11:25:58.921862 281473158656576 model_training_utils.py:518] Train Step: 1/20  / loss = 12.329887390136719  masked_lm_accuracy = 0.000000  lm_example_loss = 11.555142 next_sentence_accuracy = 0.395833  next_sentence_loss = 0.774745
    I0329 11:25:58.923070 281473158656576 keras_utils.py:145] TimeHistory: 46.61 seconds, 1.03 examples/second between steps 0 and 1
    I0329 11:26:03.738602 281473158656576 model_training_utils.py:518] Train Step: 2/20  / loss = 12.409647941589355  masked_lm_accuracy = 0.000000  lm_example_loss = 11.572385 next_sentence_accuracy = 0.312500  next_sentence_loss = 0.837263
    I0329 11:26:03.739170 281473158656576 keras_utils.py:145] TimeHistory: 4.79 seconds, 10.03 examples/second between steps 1 and 2
    I0329 11:26:08.520679 281473158656576 model_training_utils.py:518] Train Step: 3/20  / loss = 12.32846450805664  masked_lm_accuracy = 0.000000  lm_example_loss = 11.509584  next_sentence_accuracy = 0.437500  next_sentence_loss = 0.818880
        (snip)
--> I0329 11:26:37.495301 281473158656576 keras_utils.py:145] TimeHistory: 4.79 seconds, 10.02 examples/second between steps 8 and 9
    I0329 11:26:42.298977 281473158656576 model_training_utils.py:518] Train Step: 10/20  / loss = 9.174365043640137  masked_lm_accuracy = 0.025974  lm_example_loss = 8.481404  next_sentence_accuracy = 0.562500  next_sentence_loss = 0.692961
--> I0329 11:26:42.299548 281473158656576 keras_utils.py:145] TimeHistory: 4.78 seconds, 10.05 examples/second between steps 9 and 10
    I0329 11:26:47.065550 281473158656576 model_training_utils.py:518] Train Step: 11/20  / loss = 8.464371681213379  masked_lm_accuracy = 0.070133  lm_example_loss = 7.815945  next_sentence_accuracy = 0.666667  next_sentence_loss = 0.648427
--> I0329 11:26:47.066114 281473158656576 keras_utils.py:145] TimeHistory: 4.74 seconds, 10.13 examples/second between steps 10 and 11
    I0329 11:26:51.883372 281473158656576 model_training_utils.py:518] Train Step: 12/20  / loss = 8.288689613342285  masked_lm_accuracy = 0.098573  lm_example_loss = 7.585730  next_sentence_accuracy = 0.520833  next_sentence_loss = 0.702960
        (snip)

Check the examples/second output at each step. Since performance is unstable for the first few steps, please look at the 10th step and beyond.

On FX700 (2.0GHz), the expected training results for run1proc.sh and run1node.sh are 9.5~10 examples/sec.

Note that in run1node.sh, two processes output their respective results, and each process runs the same workload as run1proc.sh, so the overall processing volume is the sum of the two. Because of this, the performance of each process is slightly lower than with run1proc.sh.

        (snip)
    I0320 15:53:23.390330 281473826665024 keras_utils.py:145] TimeHistory: 4.94 seconds, 9.72 examples/second between steps 11 and 12
    I0320 15:53:23.392809 281473556394560 keras_utils.py:145] TimeHistory: 4.94 seconds, 9.72 examples/second between steps 11 and 12
    I0320 15:53:28.290043 281473826665024 keras_utils.py:145] TimeHistory: 4.89 seconds, 9.81 examples/second between steps 12 and 13
    I0320 15:53:28.291527 281473556394560 keras_utils.py:145] TimeHistory: 4.89 seconds, 9.82 examples/second between steps 12 and 13
        (snip)

04_Mask-R-CNN

Use the official model from Google.

https://github.com/tensorflow/models/tree/master/research/object_detection
Commit id: dc4d11216b (2020/11/8)

$ pwd
/home/user/tensorflow/fcc_build_script/sample_script/04_Mask-R-CNN

$ bash 40_setup_mask-r-cnn.sh              # Setup (20 min.)
$ bash 41_dataset.sh                       # Download the training data (26GB) (3 hours 30 min.)
$ bash run1proc.sh                         # Run (1 node, 1 proc., 24 cores)
$ bash run1node.sh                         # Run (1 node, 2 proc., 24 cores/proc)

Scripts for two or more nodes are not provided. Please create your own based on run1node.sh.

The following is an example of the output. (See the lines marked with the arrow sign.)

    INFO:tensorflow:Step 1 per-step time 181.858s loss=9.109
    INFO:tensorflow:Step 2 per-step time 4.071s loss=9.131
       (snip)
    INFO:tensorflow:Step 20 per-step time 4.378s loss=4.496
--> INFO:tensorflow:Avg per-step time 4.592s Avg per-step batch 0.436

On FX700 (2.0GHz), the expected result for run1proc.sh is around 0.45 batch/sec, and expected result for run1node.sh is around 0.65 batch/sec.

Note that in run1node.sh, two processes output their respective results, but batch/sec is calculated based on the total number of batches. (The output from each process is not exactly the same, because the elapsed time is measured by each process separately.)

    INFO:tensorflow:Step 19 per-step time 6.789s loss=4.601
    INFO:tensorflow:Step 20 per-step time 6.313s loss=4.492
--> INFO:tensorflow:Avg per-step time 6.206s Avg per-step batch 0.645
    INFO:tensorflow:Step 20 per-step time 6.578s loss=4.569
--> INFO:tensorflow:Avg per-step time 6.232s Avg per-step batch 0.642

4. Troubleshooting

In '04_make_venv.sh', an error occurred while building NumPy.

Two causes are possible.

  • If you get an error about _ctype, libffi-devel is missing; install it with yum (see the example below).
  • If a Fortran compiler cannot be found, install gcc-gfortran with yum.
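For example (run as root or via sudo):

$ sudo yum install libffi-devel gcc-gfortran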

python3 is not working

When all of the following conditions are met, you will get a "cannot execute binary file: Exec format error" message.

  • Offline installation is being performed.
  • The download system is other than FX1000 or FX700 (e.g. PRIMERGY or other x86 server).
  • The download system and target system share the network storage, and you are trying to install on it.
  • You have already built TensorFlow and are going to build a sample model later.

In this case, please do one of the following:

  1. Download everything first, then build it.
  2. Separate the download directory and build directory

5. List of Software Versions

The versions of the major software components are listed in the table below.

| Software | Version | License | Remarks |
| --- | --- | --- | --- |
| Python | 3.9.x (2021/10/4 or later) | GPL | 'x' depends on the installation date (the latest commit in the 3.9 branch is used) |
| TensorFlow | 2.7.0 (2021/10/31) | Apache 2.0 | |
| bazel | 4.2.2 (2021/12/02) | Apache 2.0 | |
| oneDNN | v2.4.0 (2021/10/5) | Apache 2.0 | |
| BatchedBlas | 1.0 (2021/2/9) | BSD-3 | |
| Horovod | 0.23.0 (2021/10/6) | Apache 2.0 | |
| NumPy | 1.22.x (2021/12/30 or later) | BSD-3 | 'x' depends on the installation date (the latest commit in the 1.22 branch is used) |
| SciPy | 1.7.x (2021/6/19 or later) | BSD-3 | 'x' depends on the installation date (the latest commit in the 1.7 branch is used) |

For other software modules, basically the latest versions available at the time of installation are used.

pip3 list

After running the installation scripts, a file named pip3_list.txt will be generated. The following is the content of the file after installing TensorFlow and all sample models (as of 3/26/2022). Note that the modules and their versions may change depending on the installation date.

Package                Version
---------------------- ---------
absl-py                0.13.0
astunparse             1.6.3
cachetools             4.2.7
certifi                2021.5.30
charset-normalizer     2.0.4
cloudpickle            1.6.0
contextlib2            21.6.0
cppy                   1.1.0
cycler                 0.10.0
Cython                 0.29.24
dataclasses            0.6
gast                   0.3.3
gin-config             0.4.0
google-auth            1.34.0
google-auth-oauthlib   0.4.5
google-pasta           0.2.0
grpcio                 1.29.0
h5py                   2.10.0
horovod                0.19.5
idna                   3.2
Keras-Applications     1.0.8
Keras-Preprocessing    1.1.2
kiwisolver             1.3.1
lvis                   0.5.3
lxml                   4.6.3
Markdown               3.3.4
matplotlib             3.3.2
numpy                  1.18.4
oauthlib               3.1.1
OpenNMT-tf             2.11.0
opt-einsum             3.3.0
Pillow                 7.2.0
pip                    19.2.3
portalocker            2.0.0
protobuf               3.17.3
psutil                 5.8.0
pyasn1                 0.4.8
pyasn1-modules         0.2.8
pybind11               2.7.1
pycocotools            2.0.2
pyonmttok              1.18.3
pyparsing              2.4.7
pyter3                 0.3
pytest-runner          5.3.1
python-dateutil        2.8.2
PyYAML                 5.3.1
requests               2.76.0
requests-oauthlib      1.3.0
rouge                  1.0.1
rsa                    4.7.2
sacrebleu              1.5.1
scipy                  1.4.1
sentencepiece          0.1.96
setuptools             41.2.0
six                    1.16.0
tensorboard            2.7.2
tensorboard-plugin-wit 1.8.0
tensorflow             2.7.0
tensorflow-addons      0.10.0
tensorflow-estimator   2.7.0
tensorflow-hub         0.12.0
termcolor              1.1.0
tf-slim                1.1.0
typeguard              2.12.1
urllib3                1.26.6
Werkzeug               2.0.1
wheel                  0.36.2
wrapt                  1.12.1

Copyright

Copyright RIKEN, Japan 2021-2022
Copyright FUJITSU LIMITED 2021-2022
