TF2 environment settings #5230

Merged
merged 33 commits into from
Feb 17, 2020
Changes from 25 commits
33 commits
9d63670
implemented first level of tensorflow configuration
dakshvar22 Feb 7, 2020
58dd5fc
Merge branch 'tf2' into tf2-tfconfig
dakshvar22 Feb 12, 2020
0f8b8a5
added test and refactored env variables
dakshvar22 Feb 12, 2020
3deda46
remove extra lines
dakshvar22 Feb 12, 2020
75663a0
added changelog
dakshvar22 Feb 12, 2020
1ba52e0
removed unref variable
dakshvar22 Feb 12, 2020
9008960
add types
dakshvar22 Feb 12, 2020
6f96fa5
Merge branch 'tf2' into tf2-tfconfig
dakshvar22 Feb 12, 2020
5ec8d3e
remove old tests
dakshvar22 Feb 12, 2020
142327a
added docs
dakshvar22 Feb 12, 2020
ddae7ae
refactored tests
dakshvar22 Feb 13, 2020
c875933
WIP
dakshvar22 Feb 13, 2020
0295023
check
dakshvar22 Feb 13, 2020
2843fe0
fix test for cpu testing
dakshvar22 Feb 13, 2020
4d12daf
Merge branch 'tf2' into tf2-tfconfig
dakshvar22 Feb 13, 2020
ef527c0
address comments
dakshvar22 Feb 13, 2020
879bd0d
fix type annotations
dakshvar22 Feb 13, 2020
0fec427
fix imports
dakshvar22 Feb 13, 2020
5a88cd5
test just one pytest file on travis
dakshvar22 Feb 14, 2020
6f28218
added all tests to pytest now
dakshvar22 Feb 14, 2020
af6f037
trying to fix the test. Multiprocessing :(
dakshvar22 Feb 14, 2020
0881af0
Merge branch 'tf2' into tf2-tfconfig
dakshvar22 Feb 17, 2020
98c43b2
remove cpu env set test
dakshvar22 Feb 17, 2020
d970ae9
remove unused imports
dakshvar22 Feb 17, 2020
a96159b
refactor function
dakshvar22 Feb 17, 2020
6146bf9
Apply suggestions from code review
dakshvar22 Feb 17, 2020
99a3fe2
Apply suggestions from code review
dakshvar22 Feb 17, 2020
14b6ea5
changes from code comments
dakshvar22 Feb 17, 2020
98d54a9
add docstring
dakshvar22 Feb 17, 2020
69bd8f2
Update rasa/utils/tensorflow/environment.py
dakshvar22 Feb 17, 2020
eb4fce6
Apply suggestions from code review
dakshvar22 Feb 17, 2020
8950359
shorten docstring
dakshvar22 Feb 17, 2020
f168ce9
fix import
dakshvar22 Feb 17, 2020
16 changes: 16 additions & 0 deletions changelog/5230.feature.rst
@@ -0,0 +1,16 @@
Refactor how GPU and CPU environments are configured for TensorFlow 2.0

Please refer to the `documentation <https://rasa.com/docs/rasa/api/tensorflow_usage.html>`_ to understand
which environment variables to set in what scenarios. A couple of examples are shown below as well:

.. code-block:: bash

    # Use 1024 MB of memory from the GPU with logical ID 0 and 2048 MB from the GPU with logical ID 1
TF_GPU_MEMORY_ALLOC="0:1024, 1:2048"

    # Specifies that at most 3 CPU threads can be used to parallelize multiple non-blocking operations
TF_INTER_OP_PARALLELISM_THREADS="3"

    # Specifies that at most 2 CPU threads can be used to parallelize a particular operation.
TF_INTRA_OP_PARALLELISM_THREADS="2"

55 changes: 55 additions & 0 deletions docs/api/tensorflow_usage.rst
@@ -0,0 +1,55 @@
:desc: Find out how to configure your environment for efficient usage of TensorFlow inside Rasa Open Source

.. _tensorflow_usage:

Setting up the TensorFlow Runtime
=================================

TensorFlow allows configuring the runtime environment via the
`TF Config submodule <https://www.tensorflow.org/api_docs/python/tf/config>`_. Rasa Open Source supports a smaller subset of these
configuration options and makes the appropriate calls to the ``tf.config`` submodule.
This smaller subset comprises the configurations that developers frequently use with Rasa Open Source.
All configuration options are specified using environment variables, as shown in the subsequent sections.

Optimizing CPU Performance
--------------------------

Parallelizing one operation
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Set ``TF_INTRA_OP_PARALLELISM_THREADS`` as an environment variable to specify the maximum number of threads that can be used
to parallelize the execution of one operation. If left unspecified, this value defaults to ``0``, which means TensorFlow should
pick an appropriate value depending on the system configuration.


Parallelizing multiple operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Set ``TF_INTER_OP_PARALLELISM_THREADS`` as an environment variable to specify the maximum number of threads that can be used
to parallelize the execution of multiple **non-blocking** operations. If left unspecified, this value defaults to ``0``,
which means TensorFlow should pick an appropriate value depending on the system configuration.
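As a sketch of how such a variable might be read and validated — using a helper name of our own for illustration, not part of Rasa — assuming the resulting value is then handed to ``tf.config.threading``:

```python
import os


def read_thread_setting(variable_name: str) -> int:
    """Read a TF threading environment variable; fall back to 0 so that
    TensorFlow picks an appropriate value itself."""
    raw_value = os.environ.get(variable_name, "0")
    try:
        return int(raw_value.strip())
    except ValueError:
        raise ValueError(
            f"Error parsing the environment variable '{variable_name}'. "
            f"Please cross-check the value."
        )


os.environ["TF_INTER_OP_PARALLELISM_THREADS"] = "3"
os.environ["TF_INTRA_OP_PARALLELISM_THREADS"] = "2"

inter_op = read_thread_setting("TF_INTER_OP_PARALLELISM_THREADS")
intra_op = read_thread_setting("TF_INTRA_OP_PARALLELISM_THREADS")
# These values would then be passed to
# tf.config.threading.set_inter_op_parallelism_threads(inter_op) and
# tf.config.threading.set_intra_op_parallelism_threads(intra_op).
```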


Optimizing GPU Performance
--------------------------

Limiting GPU memory growth
^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, TensorFlow reserves all the available GPU memory for the running process. This can be limiting if you are running
multiple TensorFlow processes and want to distribute memory across them. To prevent TensorFlow from doing so,
set the environment variable ``TF_FORCE_GPU_ALLOW_GROWTH`` to ``True``.
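``TF_FORCE_GPU_ALLOW_GROWTH`` is read by TensorFlow itself when a GPU device is initialized, so it must be set before the first TensorFlow GPU operation runs. A minimal sketch (the TensorFlow import is commented out so the snippet stays self-contained):

```python
import os

# Must happen before TensorFlow initializes any GPU device,
# i.e. in practice before the first `import tensorflow`.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "True"

# import tensorflow as tf  # GPU memory now grows on demand instead of
#                          # being fully reserved up front.
```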


Restricting absolute GPU memory available
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Often, a developer wants to limit the absolute amount of GPU memory that can be used by a process.

For example, you may have two visible GPUs (``GPU:0`` and ``GPU:1``) and you want to allocate 1024 MB from the first GPU
and 2048 MB from the second GPU.
You can do so by setting the environment variable ``TF_GPU_MEMORY_ALLOC="0:1024, 1:2048"``.

Another scenario is when you have access to two GPUs (``GPU:0`` and ``GPU:1``) but would like to use only the second
GPU.
Setting ``TF_GPU_MEMORY_ALLOC="1:2048"`` would make 2048 MB of memory available from GPU 1.
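The parsing described above can be sketched as follows — a re-implementation for illustration only; the shipped helper is ``rasa.utils.tensorflow.environment.parse_gpu_config`` in the diff below:

```python
from typing import Dict


def parse_gpu_memory_config(gpu_memory_config: str) -> Dict[int, int]:
    """Parse a string like "0:1024, 1:2048" into {gpu_id: memory_in_mb}.

    Mirrors the format accepted by TF_GPU_MEMORY_ALLOC, where each
    comma-separated entry is "<gpu_logical_id>:<memory_in_mb>".
    """
    parsed = {}
    for entry in gpu_memory_config.split(","):
        gpu_id, memory = entry.split(":")
        # int() tolerates surrounding whitespace, so "0:1024, 1:2048" works.
        parsed[int(gpu_id)] = int(memory)
    return parsed


print(parse_gpu_memory_config("0:1024, 1:2048"))  # {0: 1024, 1: 2048}
```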
1 change: 1 addition & 0 deletions docs/index.rst
Expand Up @@ -93,6 +93,7 @@ Understand messages, hold conversations, and connect to messaging channels and A
api/lock-stores
api/training-data-importers
api/core-featurization
api/tensorflow_usage
migration-guide
changelog

Expand Down
3 changes: 3 additions & 0 deletions rasa/__main__.py
Expand Up @@ -8,6 +8,7 @@
from rasa.cli.arguments.default_arguments import add_logging_options
from rasa.cli.utils import parse_last_positional_argument_as_model_path
from rasa.utils.common import set_log_level
import rasa.utils.tensorflow.environment as tf_env

logger = logging.getLogger(__name__)

Expand Down Expand Up @@ -68,6 +69,8 @@ def main() -> None:
)
set_log_level(log_level)

tf_env.setup_tf_environment()

# insert current path in syspath so custom modules are found
sys.path.insert(1, os.getcwd())

Expand Down
4 changes: 4 additions & 0 deletions rasa/constants.py
Expand Up @@ -59,3 +59,7 @@

DEFAULT_SESSION_EXPIRATION_TIME_IN_MINUTES = 60
DEFAULT_CARRY_OVER_SLOTS_TO_NEW_SESSION = True

ENV_GPU_CONFIG = "TF_GPU_MEMORY_ALLOC"
ENV_CPU_INTER_OP_CONFIG = "TF_INTER_OP_PARALLELISM_THREADS"
ENV_CPU_INTRA_OP_CONFIG = "TF_INTRA_OP_PARALLELISM_THREADS"
129 changes: 129 additions & 0 deletions rasa/utils/tensorflow/environment.py
@@ -0,0 +1,129 @@
import logging
import os
from typing import Text, Dict
import typing

if typing.TYPE_CHECKING:
from tensorflow import config as tf_config

import rasa.utils.common as rasa_utils
from rasa.constants import (
ENV_GPU_CONFIG,
ENV_CPU_INTER_OP_CONFIG,
ENV_CPU_INTRA_OP_CONFIG,
)

logger = logging.getLogger(__name__)


def setup_gpu_environment() -> None:
"""Set configuration for a GPU environment based on the environment variable set"""

gpu_memory_config = os.getenv(ENV_GPU_CONFIG)
if gpu_memory_config:

# Import from tensorflow only if necessary (i.e. the environment variable was set)
from tensorflow import config as tf_config

parsed_gpu_config = parse_gpu_config(gpu_memory_config)
physical_gpus = tf_config.list_physical_devices("GPU")

# Logic taken from https://www.tensorflow.org/guide/gpu
if physical_gpus:
for gpu_id, gpu_id_memory in parsed_gpu_config.items():

allocate_gpu_memory(physical_gpus[gpu_id], gpu_id_memory)

else:
rasa_utils.raise_warning(
f"You have an environment variable '{ENV_GPU_CONFIG}' set but no GPUs were detected to configure"
)


def allocate_gpu_memory(
gpu_instance: "tf_config.PhysicalDevice", logical_memory: int
) -> None:

from tensorflow import config as tf_config

try:
tf_config.experimental.set_virtual_device_configuration(
gpu_instance,
[
tf_config.experimental.VirtualDeviceConfiguration(
memory_limit=logical_memory
)
],
)

except RuntimeError:
# Add a helper explanation where the error comes from
raise RuntimeError(
"Error while setting up tensorflow environment. "
"Virtual devices must be set before GPUs have been initialized"
)


def parse_gpu_config(gpu_memory_config: Text) -> Dict[int, int]:
"""Parse GPU configuration variable from a string to a dict"""
Reviewer comment: please describe the parameters when adding a docstring (I know it's not like that in other parts of the code, but we have to get consistent with our docstrings and with our coding conventions)


# gpu_config is of format "gpu_id_1:gpu_id_1_memory, gpu_id_2: gpu_id_2_memory"
# Parse it and store in a dictionary
parsed_gpu_config = {}

try:
for instance in gpu_memory_config.split(","):
instance_gpu_id, instance_gpu_mem = instance.split(":")
instance_gpu_id = int(instance_gpu_id)
instance_gpu_mem = int(instance_gpu_mem)

parsed_gpu_config[instance_gpu_id] = instance_gpu_mem
except ValueError:
# Add a helper explanation
raise ValueError(
f"Error parsing GPU configuration. Please cross-check the format of '{ENV_GPU_CONFIG}'"
)

return parsed_gpu_config


def setup_cpu_environment() -> None:
"""Set configuration for the CPU environment based on the environment variable set"""

inter_op_parallel_threads = os.getenv(ENV_CPU_INTER_OP_CONFIG)
intra_op_parallel_threads = os.getenv(ENV_CPU_INTRA_OP_CONFIG)

if not inter_op_parallel_threads and not intra_op_parallel_threads:
return

from tensorflow import config as tf_config

if inter_op_parallel_threads:

try:
inter_op_parallel_threads = int(inter_op_parallel_threads.strip())
except ValueError:
raise ValueError(
f"Error parsing the environment variable '{ENV_CPU_INTER_OP_CONFIG}'. Please "
f"cross-check the value"
)

tf_config.threading.set_inter_op_parallelism_threads(inter_op_parallel_threads)

if intra_op_parallel_threads:

try:
intra_op_parallel_threads = int(intra_op_parallel_threads.strip())
except ValueError:
raise ValueError(
f"Error parsing the environment variable '{ENV_CPU_INTRA_OP_CONFIG}'. Please "
f"cross-check the value"
)

tf_config.threading.set_intra_op_parallelism_threads(intra_op_parallel_threads)


def setup_tf_environment() -> None:
setup_cpu_environment()
setup_gpu_environment()
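Putting it together, a sketch of the intended call order, mirroring the ``rasa/__main__.py`` change above: the environment variables must be set before ``setup_tf_environment()`` runs, which in turn must run before TensorFlow initializes any devices. The Rasa import is commented out so the sketch stays self-contained:

```python
import os

# Set the configuration variables first ...
os.environ["TF_INTER_OP_PARALLELISM_THREADS"] = "3"
os.environ["TF_GPU_MEMORY_ALLOC"] = "0:1024"

# ... then apply them, before any TensorFlow device is initialized:
# from rasa.utils.tensorflow.environment import setup_tf_environment
# setup_tf_environment()  # applies CPU settings first, then GPU settings
```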
9 changes: 0 additions & 9 deletions rasa/utils/train_utils.py
Expand Up @@ -31,15 +31,6 @@
logger = logging.getLogger(__name__)


def load_tf_config(config: Dict[Text, Any]) -> Optional[tf.compat.v1.ConfigProto]:
"""Prepare `tf.compat.v1.ConfigProto` for training"""

if config.get("tf_config") is not None:
return tf.compat.v1.ConfigProto(**config.pop("tf_config"))
else:
return None


def normalize(values: np.ndarray, ranking_length: Optional[int] = 0) -> np.ndarray:
"""Normalizes an array of positive numbers over the top `ranking_length` values.
Other values will be set to 0.
Expand Down
73 changes: 0 additions & 73 deletions tests/core/test_policies.py
Expand Up @@ -52,35 +52,6 @@
from tests.core.utilities import get_tracker, read_dialogue_file, user_uttered


def tf_defaults():
return {
"tf_config": {
"device_count": {"CPU": 4},
# tell tf.Session to use CPU limit, if you have
# more CPU, you can increase this value appropriately
"inter_op_parallelism_threads": 0,
# the number of threads in the thread pool available
# for each process for blocking operation nodes set to 0
# to allow the system to select the appropriate value.
"intra_op_parallelism_threads": 0, # tells the degree of thread
# parallelism of the tf.Session operation.
# the smaller the value, the less reuse the thread will have
# and the more likely it will use more CPU cores.
# if the value is 0,
# tensorflow will automatically select an appropriate value.
"gpu_options": {"allow_growth": True}
# if set True, will try to allocate
# as much GPU memory as possible to support running
}
}


def session_config():
import tensorflow as tf

return tf.ConfigProto(**tf_defaults()["tf_config"])


async def train_trackers(domain, augmentation_factor=20):
return await training.load_data(
DEFAULT_STORIES_FILE, domain, augmentation_factor=augmentation_factor
Expand Down Expand Up @@ -185,18 +156,6 @@ def test_persist_and_load_empty_policy(self, tmpdir):
loaded = empty_policy.__class__.load(tmpdir.strpath)
assert loaded is not None

# TODO test tf config
# def test_tf_config(self, trained_policy, tmpdir):
# if hasattr(trained_policy, "session"):
# import tensorflow as tf
#
# # noinspection PyProtectedMember
# assert trained_policy.session._config == tf.Session()._config
# trained_policy.persist(tmpdir.strpath)
# loaded = trained_policy.__class__.load(tmpdir.strpath)
# # noinspection PyProtectedMember
# assert loaded.session._config == tf.Session()._config

@staticmethod
def _get_next_action(policy, events, domain):
tracker = get_tracker(events)
Expand All @@ -212,22 +171,6 @@ def create_policy(self, featurizer, priority):
return p


class TestKerasPolicyWithTfConfig(PolicyTestCollection):
def create_policy(self, featurizer, priority):
p = KerasPolicy(featurizer, priority, **tf_defaults())
return p

# TODO fix and test tf config
@pytest.mark.skip(reason="We need to fix tf.config!")
def test_tf_config(self, trained_policy, tmpdir):
# noinspection PyProtectedMember
assert trained_policy.session._config == session_config()
trained_policy.persist(tmpdir.strpath)
loaded = trained_policy.__class__.load(tmpdir.strpath)
# noinspection PyProtectedMember
assert loaded.session._config == session_config()


class TestSklearnPolicy(PolicyTestCollection):
def create_policy(self, featurizer, priority, **kwargs):
p = SklearnPolicy(featurizer, priority, **kwargs)
Expand Down Expand Up @@ -529,22 +472,6 @@ def create_policy(self, featurizer, priority):
return p


class TestTEDPolicyWithTfConfig(TestTEDPolicy):
def create_policy(self, featurizer, priority):
p = TEDPolicy(featurizer=featurizer, priority=priority, **tf_defaults())
return p

# TODO test tf config
@pytest.mark.skip(reason="Fix tf config.")
def test_tf_config(self, trained_policy, tmpdir):
# noinspection PyProtectedMember
assert trained_policy.session._config == session_config()
trained_policy.persist(tmpdir.strpath)
loaded = trained_policy.__class__.load(tmpdir.strpath)
# noinspection PyProtectedMember
assert loaded.session._config == session_config()


class TestMemoizationPolicy(PolicyTestCollection):
def create_policy(self, featurizer, priority):
max_history = None
Expand Down
11 changes: 11 additions & 0 deletions tests/utils/test_tf_environment.py
@@ -0,0 +1,11 @@
import pytest
from rasa.utils.tensorflow.environment import parse_gpu_config


@pytest.mark.parametrize(
"gpu_config_string, parsed_gpu_config",
[("0: 1024", {0: 1024}), ("0:1024, 1:2048", {0: 1024, 1: 2048})],
)
def test_gpu_config_parser(gpu_config_string, parsed_gpu_config):

assert parse_gpu_config(gpu_config_string) == parsed_gpu_config