Merged

Commits (40)
7c8fc23  updates (ericl, Aug 25, 2020)
8bdb733  lint (ericl, Aug 25, 2020)
c0a2412  cleanup init (ericl, Aug 25, 2020)
2a34c30  wip (ericl, Aug 25, 2020)
757f9c8  fix tests (ericl, Aug 25, 2020)
953599b  fix gpu ids crash (ericl, Aug 25, 2020)
2db799c  update (ericl, Aug 25, 2020)
03ba127  fix show in webui (ericl, Aug 25, 2020)
14957fd  lint (ericl, Aug 25, 2020)
5b4daea  fix tune (ericl, Aug 25, 2020)
1065757  remove (ericl, Aug 25, 2020)
e80bb27  fix click (ericl, Aug 25, 2020)
9f4089e  update (ericl, Aug 25, 2020)
cf5a7c8  fix (ericl, Aug 25, 2020)
5c53167  lint (ericl, Aug 26, 2020)
dcc9504  remove projects (ericl, Aug 26, 2020)
fde672d  Merge remote-tracking branch 'upstream/master' into ray-1.0 (ericl, Aug 26, 2020)
306e347  lint (ericl, Aug 26, 2020)
cc7d102  updates (ericl, Aug 26, 2020)
80a9218  remove project rst (ericl, Aug 26, 2020)
502ef3a  fix spelling (ericl, Aug 26, 2020)
96acfca  remove (ericl, Aug 26, 2020)
1816d0e  Merge remote-tracking branch 'upstream/master' into ray-1.0 (ericl, Aug 26, 2020)
08e4b12  fix (ericl, Aug 26, 2020)
e3b0cd2  rever (ericl, Aug 26, 2020)
41fefa9  fix (ericl, Aug 26, 2020)
47bf9a1  fix (ericl, Aug 26, 2020)
239579c  update (ericl, Aug 27, 2020)
4e53a6a  Merge remote-tracking branch 'upstream/master' into ray-1.0 (ericl, Aug 27, 2020)
83ecb1c  ffs (ericl, Aug 27, 2020)
156b6e4  Merge remote-tracking branch 'upstream/master' into ray-1.0 (ericl, Aug 27, 2020)
4d62b34  Merge remote-tracking branch 'upstream/master' into ray-1.0 (ericl, Aug 27, 2020)
e0976cb  fix (ericl, Aug 27, 2020)
e59415a  update timeout (ericl, Aug 28, 2020)
a0cac86  Merge remote-tracking branch 'upstream/master' into ray-1.0 (ericl, Aug 28, 2020)
07f8bb2  lint (ericl, Aug 28, 2020)
d49c2f0  failures (ericl, Aug 28, 2020)
b04948b  Merge remote-tracking branch 'upstream/master' into ray-1.0 (ericl, Aug 28, 2020)
c106385  update (ericl, Aug 28, 2020)
f64c72b  remove last weakref (ericl, Aug 28, 2020)
5 changes: 0 additions & 5 deletions BUILD.bazel

@@ -1783,11 +1783,6 @@ filegroup(
         "python/ray/experimental/*.py",
         "python/ray/util/*.py",
         "python/ray/internal/*.py",
-        "python/ray/projects/*.py",
-        "python/ray/projects/schema.json",
-        "python/ray/projects/templates/cluster_template.yaml",
-        "python/ray/projects/templates/project_template.yaml",
-        "python/ray/projects/templates/requirements.txt",
         "python/ray/workers/default_worker.py",
     ]),
 )
@@ -93,7 +93,7 @@ def optimizer_creator(model, config):
         momentum=config.get("momentum", 0.9))


-ray.init(address="auto" if not args.smoke_test else None, log_to_driver=True)
+ray.init(address="auto" if not args.smoke_test else None, _log_to_driver=True)
 num_training_workers = 1 if args.smoke_test else 3

 executor = FailureInjectorExecutor(queue_trials=True)
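
The underscore prefix marks options that fall outside the stable public API in the Ray 1.0 cleanup. For reference, a minimal sketch of the renamed call, assuming a Ray build that includes this PR:

    import ray

    # Sketch only: in this PR, ray.init()'s log_to_driver flag becomes the
    # underscore-prefixed _log_to_driver, marking it as an advanced option.
    ray.init(_log_to_driver=False)  # do not forward worker logs to the driver

    @ray.remote
    def f():
        print("this worker output is no longer echoed on the driver")

    ray.get(f.remote())
    ray.shutdown()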
3 changes: 2 additions & 1 deletion ci/long_running_tests/workloads/serve.py

@@ -38,7 +38,8 @@
 @serve.accept_batch
 def echo(_):
     time.sleep(0.01)  # Sleep for 10ms
-    ray.show_in_webui(str(serve.context.batch_size), key="Current batch size")
+    ray.show_in_dashboard(
+        str(serve.context.batch_size), key="Current batch size")
     return ["hi {}".format(i) for i in range(serve.context.batch_size)]


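
The rename of ray.show_in_webui to ray.show_in_dashboard tracks the webui-to-dashboard renaming throughout this PR. A minimal usage sketch, assuming the signature shown in the diff (a message string plus a key that groups updates under one dashboard label):

    import time
    import ray

    ray.init()

    @ray.remote
    def reporter():
        for i in range(5):
            # Display a custom message for this worker in the Ray dashboard;
            # messages sent with the same key overwrite each other.
            ray.show_in_dashboard("step {}".format(i), key="progress")
            time.sleep(1)

    ray.get(reporter.remote())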
2 changes: 1 addition & 1 deletion ci/long_running_tests/workloads/serve_failure.py

@@ -26,7 +26,7 @@
     dashboard_host="0.0.0.0")

 ray.init(
-    address=cluster.address, dashboard_host="0.0.0.0", log_to_driver=False)
+    address=cluster.address, dashboard_host="0.0.0.0", _log_to_driver=False)
 serve.init()

13 changes: 6 additions & 7 deletions ci/travis/ci.sh

@@ -153,7 +153,6 @@ test_python() {
      -python/ray/tests:test_multiprocessing  # test_connect_to_ray() fails to connect to raylet
      -python/ray/tests:test_node_manager
      -python/ray/tests:test_object_manager
-      -python/ray/tests:test_projects
      -python/ray/tests:test_ray_init  # test_redis_port() seems to fail here, but pass in isolation
      -python/ray/tests:test_resource_demand_scheduler
      -python/ray/tests:test_stress  # timeout

@@ -279,12 +278,12 @@ build_wheels() {
      # caused timeouts in the past. See the "cache: false" line below.
      local MOUNT_BAZEL_CACHE=(
        -v "${HOME}/ray-bazel-cache":/root/ray-bazel-cache
-        -e TRAVIS=true
-        -e TRAVIS_PULL_REQUEST="${TRAVIS_PULL_REQUEST:-false}"
-        -e encrypted_1c30b31fe1ee_key="${encrypted_1c30b31fe1ee_key-}"
-        -e encrypted_1c30b31fe1ee_iv="${encrypted_1c30b31fe1ee_iv-}"
-        -e TRAVIS_COMMIT="${TRAVIS_COMMIT}"
-        -e CI="${CI}"
+        -e "TRAVIS=true"
+        -e "TRAVIS_PULL_REQUEST=${TRAVIS_PULL_REQUEST:-false}"
+        -e "encrypted_1c30b31fe1ee_key=${encrypted_1c30b31fe1ee_key-}"
+        -e "encrypted_1c30b31fe1ee_iv=${encrypted_1c30b31fe1ee_iv-}"
+        -e "TRAVIS_COMMIT=${TRAVIS_COMMIT}"
+        -e "CI=${CI}"
      )

      # This command should be kept in sync with ray/python/README-building-wheels.md,
2 changes: 1 addition & 1 deletion doc/source/actors.rst

@@ -72,7 +72,7 @@ Resources with Actors
 You can specify that an actor requires CPUs or GPUs in the decorator. While Ray has built-in support for CPUs and GPUs, Ray can also handle custom resources.

 When using GPUs, Ray will automatically set the environment variable ``CUDA_VISIBLE_DEVICES`` for the actor after instantiation. The actor will have access to a list of the IDs of the GPUs
-that it is allowed to use via ``ray.get_gpu_ids(as_str=True)``. This is a list of strings,
+that it is allowed to use via ``ray.get_gpu_ids()``. This is a list of strings,
 like ``[]``, or ``['1']``, or ``['2', '5', '6']``. Under some circumstances, the IDs of GPUs could be given as UUID strings instead of indices (see the `CUDA programming guide <https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars>`__).

 .. code-block:: python
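
A short sketch of the documented behavior, assuming a machine (or Ray runtime) configured with at least one GPU:

    import os
    import ray

    ray.init(num_gpus=2)

    @ray.remote(num_gpus=1)
    class GPUActor:
        def ping(self):
            # Ray has already set CUDA_VISIBLE_DEVICES for this actor; after
            # this PR, get_gpu_ids() takes no as_str argument and returns the
            # assigned IDs as strings, e.g. ['0'].
            return ray.get_gpu_ids(), os.environ.get("CUDA_VISIBLE_DEVICES")

    actor = GPUActor.remote()
    print(ray.get(actor.ping.remote()))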
46 changes: 1 addition & 45 deletions doc/source/configure.rst

@@ -159,7 +159,7 @@ To add authentication via the Python API, start Ray using:

 .. code-block:: python

-    ray.init(redis_password="password")
+    ray.init(_redis_password="password")

 To add authentication via the CLI or to connect to an existing Ray instance with
 password-protected Redis ports:

@@ -182,48 +182,4 @@ to localhost when the ray is started using ``ray.init``.

 See the `Redis security documentation <https://redis.io/topics/security>`__
 for more information.

-Using the Object Store with Huge Pages
---------------------------------------
-
-Plasma is a high-performance shared memory object store originally developed in
-Ray and now being developed in `Apache Arrow`_. See the `relevant
-documentation`_.
-
-On Linux, it is possible to increase the write throughput of the Plasma object
-store by using huge pages. You first need to create a file system and activate
-huge pages as follows.
-
-.. code-block:: shell
-
-  sudo mkdir -p /mnt/hugepages
-  gid=`id -g`
-  uid=`id -u`
-  sudo mount -t hugetlbfs -o uid=$uid -o gid=$gid none /mnt/hugepages
-  sudo bash -c "echo $gid > /proc/sys/vm/hugetlb_shm_group"
-  # This typically corresponds to 20000 2MB pages (about 40GB), but this
-  # depends on the platform.
-  sudo bash -c "echo 20000 > /proc/sys/vm/nr_hugepages"
-
-**Note:** Once you create the huge pages, they will take up memory which will
-never be freed unless you remove the huge pages. If you run into memory issues,
-that may be the issue.
-
-You need root access to create the file system, but not for running the object
-store.
-
-You can then start Ray with huge pages on a single machine as follows.
-
-.. code-block:: python
-
-  ray.init(huge_pages=True, plasma_directory="/mnt/hugepages")
-
-In the cluster case, you can do it by passing ``--huge-pages`` and
-``--plasma-directory=/mnt/hugepages`` into ``ray start`` on any machines where
-huge pages should be enabled.
-
-See the relevant `Arrow documentation for huge pages`_.
-
-.. _`Apache Arrow`: https://arrow.apache.org/
-.. _`relevant documentation`: https://arrow.apache.org/docs/python/plasma.html#the-plasma-in-memory-object-store
-.. _`Arrow documentation for huge pages`: https://arrow.apache.org/docs/python/plasma.html#using-plasma-with-huge-pages
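
As with _log_to_driver above, redis_password becomes the underscore-prefixed _redis_password. A sketch of both documented uses (starting a password-protected instance, and connecting to an existing one), assuming a Ray build that includes this PR:

    import ray

    # Start a new Ray instance with a password-protected Redis.
    ray.init(_redis_password="password")

    # Or connect to an existing password-protected cluster; "auto" assumes
    # a running cluster started with the same password.
    # ray.init(address="auto", _redis_password="password")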
1 change: 0 additions & 1 deletion doc/source/index.rst

@@ -206,7 +206,6 @@ Academic Papers
    joblib.rst
    iter.rst
    pandas_on_ray.rst
-   projects.rst

 .. toctree::
    :hidden:
49 changes: 4 additions & 45 deletions doc/source/package-ref.rst

@@ -74,12 +74,12 @@ ray.get_resource_ids

 .. autofunction:: ray.get_resource_ids

-.. _ray-get_webui_url-ref:
+.. _ray-get_dashboard_url-ref:

-ray.get_webui_url
-~~~~~~~~~~~~~~~~~
+ray.get_dashboard_url
+~~~~~~~~~~~~~~~~~~~~~

-.. autofunction:: ray.get_webui_url
+.. autofunction:: ray.get_dashboard_url

 .. _ray-shutdown-ref:

@@ -88,21 +88,6 @@ ray.shutdown

 .. autofunction:: ray.shutdown

-
-.. _ray-register_custom_serializer-ref:
-
-ray.register_custom_serializer
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. autofunction:: ray.register_custom_serializer
-
-.. _ray-profile-ref:
-
-ray.profile
-~~~~~~~~~~~
-
-.. autofunction:: ray.profile
-
 .. _ray-method-ref:

 ray.method

@@ -123,27 +108,13 @@ ray.nodes

 .. autofunction:: ray.nodes

-.. _ray-objects-ref:
-
-ray.objects
-~~~~~~~~~~~
-
-.. autofunction:: ray.objects
-
 .. _ray-timeline-ref:

 ray.timeline
 ~~~~~~~~~~~~

 .. autofunction:: ray.timeline

-.. _ray-object_transfer_timeline-ref:
-
-ray.object_transfer_timeline
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. autofunction:: ray.object_transfer_timeline
-
 .. _ray-cluster_resources-ref:

 ray.cluster_resources

@@ -221,24 +192,12 @@ The Ray Command Line API
    :prog: ray stack
    :show-nested:

-.. _ray-stat-doc:
-
-.. click:: ray.scripts.scripts:statistics
-   :prog: ray statistics
-   :show-nested:
-
 .. _ray-memory-doc:

 .. click:: ray.scripts.scripts:memory
    :prog: ray memory
    :show-nested:

-.. _ray-globalgc-doc:
-
-.. click:: ray.scripts.scripts:globalgc
-   :prog: ray globalgc
-   :show-nested:
-
 .. _ray-timeline-doc:

 .. click:: ray.scripts.scripts:timeline
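
The get_webui_url to get_dashboard_url rename follows the same terminology change as show_in_dashboard above. A minimal sketch of the renamed accessor:

    import ray

    ray.init()
    # get_dashboard_url() replaces get_webui_url(); it returns the host:port
    # of the dashboard for the current session, e.g. "127.0.0.1:8265".
    print(ray.get_dashboard_url())
    ray.shutdown()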