doc/using_tf.rst: 3 additions & 13 deletions
@@ -443,20 +443,10 @@ After a TensorFlow estimator has been fit, it saves a TensorFlow SavedModel in
 the S3 location defined by ``output_path``. You can call ``deploy`` on a TensorFlow
 estimator to create a SageMaker Endpoint.
 
-SageMaker provides two different options for deploying TensorFlow models to a SageMaker
-Endpoint:
+Your model will be deployed to a TensorFlow Serving-based server. The server provides a super-set of the
+`TensorFlow Serving REST API <https://www.tensorflow.org/serving/api_rest>`_.
 
-The first option uses a Python-based server that allows you to specify your own custom
-input and output handling functions in a Python script. This is the default option.
-
-See `Deploying to Python-based Endpoints <https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_python.rst>`_ to learn how to use this option.
-
-
-The second option uses a TensorFlow Serving-based server to provide a super-set of the
-`TensorFlow Serving REST API <https://www.tensorflow.org/serving/api_rest>`_. This option
-does not require (or allow) a custom python script.
-
-See `Deploying to TensorFlow Serving Endpoints <https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst>`_ to learn how to use this option.
+See `Deploying to TensorFlow Serving Endpoints <https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst>`_ to learn how to deploy your model and make inference requests.
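
For context, the fit-then-deploy flow this doc describes looks roughly like the sketch below. It is not part of the diff: the entry point script, IAM role, S3 paths, instance types, and framework version are hypothetical placeholders, and the constructor argument names follow the 1.x SageMaker Python SDK in use at the time of this change.

.. code:: python

    from sagemaker.tensorflow import TensorFlow

    # Hypothetical estimator configuration; training script contents omitted.
    estimator = TensorFlow(
        entry_point="train.py",               # hypothetical training script
        role="SageMakerRole",                 # hypothetical IAM role name
        train_instance_count=1,
        train_instance_type="ml.p2.xlarge",
        framework_version="1.12",
        output_path="s3://my-bucket/models",  # the SavedModel is written here
    )
    estimator.fit("s3://my-bucket/training-data")

    # ``deploy`` creates a SageMaker Endpoint backed by TensorFlow Serving and
    # returns a predictor for making inference requests against it.
    predictor = estimator.deploy(
        initial_instance_count=1,
        instance_type="ml.m4.xlarge",
    )

    result = predictor.predict([[1.0, 2.0, 3.0]])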