rekognition Update models to latest #3381 (Merged)
@@ -883,9 +883,9 @@ CreateCollectionResult createCollection(CreateCollectionRequest createCollection
* an existing Amazon Rekognition Custom Labels dataset.
* </p>
* <p>
* To create a training dataset for a project, specify <code>train</code>
* To create a training dataset for a project, specify <code>TRAIN</code>
* for the value of <code>DatasetType</code>. To create the test dataset for
* a project, specify <code>test</code> for the value of
* a project, specify <code>TEST</code> for the value of
* <code>DatasetType</code>.
* </p>
* <p>
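
As a rough illustration of the corrected TRAIN/TEST casing above, a minimal CreateDataset sketch follows. It is editorial, not part of the diff: the credentials, the project ARN, the withProjectArn setter, and the getDatasetArn() accessor are assumed from the usual shape of this SDK's generated model classes.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.rekognition.AmazonRekognition;
    import com.amazonaws.services.rekognition.AmazonRekognitionClient;
    import com.amazonaws.services.rekognition.model.CreateDatasetRequest;
    import com.amazonaws.services.rekognition.model.CreateDatasetResult;
    import com.amazonaws.services.rekognition.model.DatasetType;

    public class CreateTrainingDatasetSketch {
        public static void main(String[] args) {
            // Placeholder credentials and project ARN; replace with real values.
            AmazonRekognition rekognition =
                    new AmazonRekognitionClient(new BasicAWSCredentials("accessKey", "secretKey"));
            String projectArn = "arn:aws:rekognition:us-east-1:123456789012:project/my-project/1";

            // DatasetType must be the upper-case value: TRAIN for training, TEST for testing.
            CreateDatasetRequest request = new CreateDatasetRequest()
                    .withProjectArn(projectArn)                       // assumed setter
                    .withDatasetType(DatasetType.TRAIN.toString());

            CreateDatasetResult result = rekognition.createDataset(request);
            // The response carries the ARN of the new dataset.
            System.out.println("Created dataset: " + result.getDatasetArn());
        }
    }
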
@@ -943,12 +943,18 @@ CreateDatasetResult createDataset(CreateDatasetRequest createDatasetRequest)
* <p>
* This API operation initiates a Face Liveness session. It returns a
* <code>SessionId</code>, which you can use to start streaming Face
* Liveness video and get the results for a Face Liveness session. You can
* use the <code>OutputConfig</code> option in the Settings parameter to
* provide an Amazon S3 bucket location. The Amazon S3 bucket stores
* reference images and audit images. You can use
* <code>AuditImagesLimit</code> to limit the number of audit images
* returned. This number is between 0 and 4. By default, it is set to 0. The
* Liveness video and get the results for a Face Liveness session.
* </p>
* <p>
* You can use the <code>OutputConfig</code> option in the Settings
* parameter to provide an Amazon S3 bucket location. The Amazon S3 bucket
* stores reference images and audit images. If no Amazon S3 bucket is
* defined, raw bytes are sent instead.
* </p>
* <p>
* You can use <code>AuditImagesLimit</code> to limit the number of audit
* images returned when <code>GetFaceLivenessSessionResults</code> is
* called. This number is between 0 and 4. By default, it is set to 0. The
* limit is best effort and based on the duration of the selfie-video.
* </p>
*
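
A minimal sketch of starting a Face Liveness session with an S3 output location and an audit-image limit is shown below. The CreateFaceLivenessSessionRequestSettings and LivenessOutputConfig class names, the bucket, and the prefix are assumptions based on the usual generated-model naming in this package; verify them against the model classes in this PR before relying on them.

    import com.amazonaws.services.rekognition.AmazonRekognition;
    import com.amazonaws.services.rekognition.model.CreateFaceLivenessSessionRequest;
    import com.amazonaws.services.rekognition.model.CreateFaceLivenessSessionRequestSettings;
    import com.amazonaws.services.rekognition.model.CreateFaceLivenessSessionResult;
    import com.amazonaws.services.rekognition.model.LivenessOutputConfig;

    public class StartLivenessSessionSketch {
        public static String startSession(AmazonRekognition rekognition) {
            // If no S3 bucket is configured, raw image bytes are returned instead.
            LivenessOutputConfig outputConfig = new LivenessOutputConfig()
                    .withS3Bucket("my-liveness-bucket")      // assumed bucket name
                    .withS3KeyPrefix("liveness-sessions/");  // assumed prefix

            CreateFaceLivenessSessionRequestSettings settings =
                    new CreateFaceLivenessSessionRequestSettings()
                            .withOutputConfig(outputConfig)
                            // 0-4 audit images; the limit is best effort for the selfie-video duration.
                            .withAuditImagesLimit(2);

            CreateFaceLivenessSessionResult result = rekognition.createFaceLivenessSession(
                    new CreateFaceLivenessSessionRequest().withSettings(settings));

            // The SessionId is later passed to GetFaceLivenessSessionResults.
            return result.getSessionId();
        }
    }
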
@@ -1819,10 +1825,12 @@ DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
* >Detecting Labels in an Image</a>.
* </p>
* <p>
* You can specify <code>MinConfidence</code> to control the confidence
* threshold for the labels returned. The default is 55%. You can also add
* the <code>MaxLabels</code> parameter to limit the number of labels
* returned. The default and upper limit is 1000 labels.
* When getting labels, you can specify <code>MinConfidence</code> to
* control the confidence threshold for the labels returned. The default is
* 55%. You can also add the <code>MaxLabels</code> parameter to limit the
* number of labels returned. The default and upper limit is 1000 labels.
* These arguments are only valid when supplying GENERAL_LABELS as a feature
* type.
* </p>
* <p>
* <b>Response Elements</b>
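
Because MinConfidence and MaxLabels now only apply when GENERAL_LABELS is supplied as a feature type, a brief DetectLabels sketch with that feature follows. The bucket and object names are placeholders, and the withFeatures setter is assumed from the feature-type request field described above.

    import com.amazonaws.services.rekognition.AmazonRekognition;
    import com.amazonaws.services.rekognition.model.DetectLabelsRequest;
    import com.amazonaws.services.rekognition.model.DetectLabelsResult;
    import com.amazonaws.services.rekognition.model.Image;
    import com.amazonaws.services.rekognition.model.Label;
    import com.amazonaws.services.rekognition.model.S3Object;

    public class DetectLabelsSketch {
        public static void detect(AmazonRekognition rekognition) {
            DetectLabelsRequest request = new DetectLabelsRequest()
                    .withImage(new Image().withS3Object(
                            new S3Object().withBucket("my-bucket").withName("photo.jpg")))
                    // MinConfidence and MaxLabels are only valid with GENERAL_LABELS.
                    .withFeatures("GENERAL_LABELS")
                    .withMinConfidence(70F)   // default is 55%
                    .withMaxLabels(25);       // default and upper limit is 1000

            DetectLabelsResult result = rekognition.detectLabels(request);
            for (Label label : result.getLabels()) {
                System.out.println(label.getName() + " (" + label.getConfidence() + "%)");
            }
        }
    }
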
@@ -2513,6 +2521,11 @@ GetContentModerationResult getContentModeration(
* <code>NextToken</code> request parameter with the token value returned
* from the previous call to <code>GetFaceDetection</code>.
* </p>
* <p>
* Note that for the <code>GetFaceDetection</code> operation, the returned
* values for <code>FaceOccluded</code> and <code>EyeDirection</code> will
* always be "null".
* </p>
*
* @param getFaceDetectionRequest
* @return getFaceDetectionResult The response from the GetFaceDetection
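
The pagination pattern described above, together with the note that FaceOccluded and EyeDirection are not populated by this operation, might look like the following sketch. The job ID is assumed to come from a prior StartFaceDetection call.

    import com.amazonaws.services.rekognition.AmazonRekognition;
    import com.amazonaws.services.rekognition.model.FaceDetection;
    import com.amazonaws.services.rekognition.model.GetFaceDetectionRequest;
    import com.amazonaws.services.rekognition.model.GetFaceDetectionResult;

    public class GetFaceDetectionSketch {
        public static void printFaces(AmazonRekognition rekognition, String jobId) {
            String nextToken = null;
            do {
                GetFaceDetectionResult result = rekognition.getFaceDetection(
                        new GetFaceDetectionRequest()
                                .withJobId(jobId)       // from StartFaceDetection
                                .withMaxResults(1000)
                                .withNextToken(nextToken));

                for (FaceDetection face : result.getFaces()) {
                    // FaceOccluded and EyeDirection are always null for GetFaceDetection.
                    System.out.println(face.getTimestamp() + " ms: confidence "
                            + face.getFace().getConfidence());
                }
                nextToken = result.getNextToken();
            } while (nextToken != null);
        }
    }
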
@@ -2542,8 +2555,14 @@ GetFaceDetectionResult getFaceDetection(GetFaceDetectionRequest getFaceDetection
* <code>CreateFaceLivenessSession</code>. Returns the corresponding Face
* Liveness confidence score, a reference image that includes a face
* bounding box, and audit images that also contain face bounding boxes. The
* Face Liveness confidence score ranges from 0 to 100. The reference image
* can optionally be returned.
* Face Liveness confidence score ranges from 0 to 100.
* </p>
* <p>
* The number of audit images returned by
* <code>GetFaceLivenessSessionResults</code> is defined by the
 * <code>AuditImagesLimit</code> parameter when calling
* <code>CreateFaceLivenessSession</code>. Reference images are always
* returned when possible.
* </p>
*
* @param getFaceLivenessSessionResultsRequest
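
For completeness, here is a sketch of fetching session results with the SessionId returned by CreateFaceLivenessSession; the getConfidence and getAuditImages accessors are assumed from the response fields described above.

    import com.amazonaws.services.rekognition.AmazonRekognition;
    import com.amazonaws.services.rekognition.model.GetFaceLivenessSessionResultsRequest;
    import com.amazonaws.services.rekognition.model.GetFaceLivenessSessionResultsResult;

    public class LivenessResultsSketch {
        public static void printResults(AmazonRekognition rekognition, String sessionId) {
            GetFaceLivenessSessionResultsResult result = rekognition.getFaceLivenessSessionResults(
                    new GetFaceLivenessSessionResultsRequest().withSessionId(sessionId));

            // The Face Liveness confidence score ranges from 0 to 100.
            System.out.println("Liveness confidence: " + result.getConfidence());

            // At most AuditImagesLimit (0-4) audit images come back; reference images
            // are returned whenever possible.
            if (result.getAuditImages() != null) {
                System.out.println("Audit images: " + result.getAuditImages().size());
            }
        }
    }
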
@@ -2937,7 +2956,7 @@ GetSegmentDetectionResult getSegmentDetection(
* <p>
* <code>GetTextDetection</code> returns an array of detected text (
* <code>TextDetections</code>) sorted by the time the text was detected, up
* to 50 words per frame of video.
* to 100 words per frame of video.
* </p>
* <p>
 * Each element of the array includes the detected text, the percentage
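
A sketch of reading GetTextDetection results (now up to 100 words per frame) is below. The job ID is assumed to come from StartTextDetection, and the TextDetectionResult/TextDetection accessor names follow the usual video-API model shape rather than anything shown in this diff.

    import com.amazonaws.services.rekognition.AmazonRekognition;
    import com.amazonaws.services.rekognition.model.GetTextDetectionRequest;
    import com.amazonaws.services.rekognition.model.GetTextDetectionResult;
    import com.amazonaws.services.rekognition.model.TextDetectionResult;

    public class GetTextDetectionSketch {
        public static void printText(AmazonRekognition rekognition, String jobId) {
            GetTextDetectionResult result = rekognition.getTextDetection(
                    new GetTextDetectionRequest().withJobId(jobId));

            // Results are sorted by the time the text was detected,
            // up to 100 words per frame of video.
            for (TextDetectionResult text : result.getTextDetections()) {
                System.out.println(text.getTimestamp() + " ms: "
                        + text.getTextDetection().getDetectedText()
                        + " (" + text.getTextDetection().getConfidence() + "%)");
            }
        }
    }
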
@@ -1269,9 +1269,9 @@ public CreateCollectionResult createCollection(CreateCollectionRequest createCol
* an existing Amazon Rekognition Custom Labels dataset.
* </p>
* <p>
* To create a training dataset for a project, specify <code>train</code>
* To create a training dataset for a project, specify <code>TRAIN</code>
* for the value of <code>DatasetType</code>. To create the test dataset for
* a project, specify <code>test</code> for the value of
* a project, specify <code>TEST</code> for the value of
* <code>DatasetType</code>.
* </p>
* <p>
@@ -1355,12 +1355,18 @@ public CreateDatasetResult createDataset(CreateDatasetRequest createDatasetReque
* <p>
* This API operation initiates a Face Liveness session. It returns a
* <code>SessionId</code>, which you can use to start streaming Face
* Liveness video and get the results for a Face Liveness session. You can
* use the <code>OutputConfig</code> option in the Settings parameter to
* provide an Amazon S3 bucket location. The Amazon S3 bucket stores
* reference images and audit images. You can use
* <code>AuditImagesLimit</code> to limit the number of audit images
* returned. This number is between 0 and 4. By default, it is set to 0. The
* Liveness video and get the results for a Face Liveness session.
* </p>
* <p>
* You can use the <code>OutputConfig</code> option in the Settings
* parameter to provide an Amazon S3 bucket location. The Amazon S3 bucket
* stores reference images and audit images. If no Amazon S3 bucket is
* defined, raw bytes are sent instead.
* </p>
* <p>
* You can use <code>AuditImagesLimit</code> to limit the number of audit
* images returned when <code>GetFaceLivenessSessionResults</code> is
* called. This number is between 0 and 4. By default, it is set to 0. The
* limit is best effort and based on the duration of the selfie-video.
* </p>
*
@@ -2763,10 +2769,12 @@ public DetectFacesResult detectFaces(DetectFacesRequest detectFacesRequest)
* >Detecting Labels in an Image</a>.
* </p>
* <p>
* You can specify <code>MinConfidence</code> to control the confidence
* threshold for the labels returned. The default is 55%. You can also add
* the <code>MaxLabels</code> parameter to limit the number of labels
* returned. The default and upper limit is 1000 labels.
* When getting labels, you can specify <code>MinConfidence</code> to
* control the confidence threshold for the labels returned. The default is
* 55%. You can also add the <code>MaxLabels</code> parameter to limit the
* number of labels returned. The default and upper limit is 1000 labels.
* These arguments are only valid when supplying GENERAL_LABELS as a feature
* type.
* </p>
* <p>
* <b>Response Elements</b>
@@ -3698,6 +3706,11 @@ public GetContentModerationResult getContentModeration(
* <code>NextToken</code> request parameter with the token value returned
* from the previous call to <code>GetFaceDetection</code>.
* </p>
* <p>
* Note that for the <code>GetFaceDetection</code> operation, the returned
* values for <code>FaceOccluded</code> and <code>EyeDirection</code> will
* always be "null".
* </p>
*
* @param getFaceDetectionRequest
* @return getFaceDetectionResult The response from the GetFaceDetection
@@ -3753,8 +3766,14 @@ public GetFaceDetectionResult getFaceDetection(GetFaceDetectionRequest getFaceDe
* <code>CreateFaceLivenessSession</code>. Returns the corresponding Face
* Liveness confidence score, a reference image that includes a face
* bounding box, and audit images that also contain face bounding boxes. The
* Face Liveness confidence score ranges from 0 to 100. The reference image
* can optionally be returned.
* Face Liveness confidence score ranges from 0 to 100.
* </p>
* <p>
* The number of audit images returned by
* <code>GetFaceLivenessSessionResults</code> is defined by the
 * <code>AuditImagesLimit</code> parameter when calling
* <code>CreateFaceLivenessSession</code>. Reference images are always
* returned when possible.
* </p>
*
* @param getFaceLivenessSessionResultsRequest
@@ -4284,7 +4303,7 @@ public GetSegmentDetectionResult getSegmentDetection(
* <p>
* <code>GetTextDetection</code> returns an array of detected text (
* <code>TextDetections</code>) sorted by the time the text was detected, up
* to 50 words per frame of video.
* to 100 words per frame of video.
* </p>
* <p>
 * Each element of the array includes the detected text, the percentage
@@ -26,9 +26,9 @@
* existing Amazon Rekognition Custom Labels dataset.
* </p>
* <p>
* To create a training dataset for a project, specify <code>train</code> for
* To create a training dataset for a project, specify <code>TRAIN</code> for
* the value of <code>DatasetType</code>. To create the test dataset for a
* project, specify <code>test</code> for the value of <code>DatasetType</code>.
* project, specify <code>TEST</code> for the value of <code>DatasetType</code>.
* </p>
* <p>
* The response from <code>CreateDataset</code> is the Amazon Resource Name
@@ -71,8 +71,8 @@ public class CreateDatasetRequest extends AmazonWebServiceRequest implements Ser

/**
* <p>
* The type of the dataset. Specify <code>train</code> to create a training
* dataset. Specify <code>test</code> to create a test dataset.
* The type of the dataset. Specify <code>TRAIN</code> to create a training
* dataset. Specify <code>TEST</code> to create a test dataset.
* </p>
* <p>
* <b>Constraints:</b><br/>
@@ -168,16 +168,16 @@ public CreateDatasetRequest withDatasetSource(DatasetSource datasetSource) {

/**
* <p>
* The type of the dataset. Specify <code>train</code> to create a training
* dataset. Specify <code>test</code> to create a test dataset.
* The type of the dataset. Specify <code>TRAIN</code> to create a training
* dataset. Specify <code>TEST</code> to create a test dataset.
* </p>
* <p>
* <b>Constraints:</b><br/>
* <b>Allowed Values: </b>TRAIN, TEST
*
* @return <p>
* The type of the dataset. Specify <code>train</code> to create a
* training dataset. Specify <code>test</code> to create a test
* The type of the dataset. Specify <code>TRAIN</code> to create a
* training dataset. Specify <code>TEST</code> to create a test
* dataset.
* </p>
* @see DatasetType
@@ -188,16 +188,16 @@ public String getDatasetType() {

/**
* <p>
* The type of the dataset. Specify <code>train</code> to create a training
* dataset. Specify <code>test</code> to create a test dataset.
* The type of the dataset. Specify <code>TRAIN</code> to create a training
* dataset. Specify <code>TEST</code> to create a test dataset.
* </p>
* <p>
* <b>Constraints:</b><br/>
* <b>Allowed Values: </b>TRAIN, TEST
*
* @param datasetType <p>
* The type of the dataset. Specify <code>train</code> to create
* a training dataset. Specify <code>test</code> to create a test
* The type of the dataset. Specify <code>TRAIN</code> to create
* a training dataset. Specify <code>TEST</code> to create a test
* dataset.
* </p>
* @see DatasetType
@@ -208,8 +208,8 @@ public void setDatasetType(String datasetType) {

/**
* <p>
* The type of the dataset. Specify <code>train</code> to create a training
* dataset. Specify <code>test</code> to create a test dataset.
* The type of the dataset. Specify <code>TRAIN</code> to create a training
* dataset. Specify <code>TEST</code> to create a test dataset.
* </p>
* <p>
* Returns a reference to this object so that method calls can be chained
@@ -219,8 +219,8 @@ public void setDatasetType(String datasetType) {
* <b>Allowed Values: </b>TRAIN, TEST
*
* @param datasetType <p>
* The type of the dataset. Specify <code>train</code> to create
* a training dataset. Specify <code>test</code> to create a test
* The type of the dataset. Specify <code>TRAIN</code> to create
* a training dataset. Specify <code>TEST</code> to create a test
* dataset.
* </p>
* @return A reference to this updated object so that method calls can be
@@ -234,16 +234,16 @@ public CreateDatasetRequest withDatasetType(String datasetType) {

/**
* <p>
* The type of the dataset. Specify <code>train</code> to create a training
* dataset. Specify <code>test</code> to create a test dataset.
* The type of the dataset. Specify <code>TRAIN</code> to create a training
* dataset. Specify <code>TEST</code> to create a test dataset.
* </p>
* <p>
* <b>Constraints:</b><br/>
* <b>Allowed Values: </b>TRAIN, TEST
*
* @param datasetType <p>
* The type of the dataset. Specify <code>train</code> to create
* a training dataset. Specify <code>test</code> to create a test
* The type of the dataset. Specify <code>TRAIN</code> to create
* a training dataset. Specify <code>TEST</code> to create a test
* dataset.
* </p>
* @see DatasetType
@@ -254,8 +254,8 @@ public void setDatasetType(DatasetType datasetType) {

/**
* <p>
* The type of the dataset. Specify <code>train</code> to create a training
* dataset. Specify <code>test</code> to create a test dataset.
* The type of the dataset. Specify <code>TRAIN</code> to create a training
* dataset. Specify <code>TEST</code> to create a test dataset.
* </p>
* <p>
* Returns a reference to this object so that method calls can be chained
@@ -265,8 +265,8 @@ public void setDatasetType(DatasetType datasetType) {
* <b>Allowed Values: </b>TRAIN, TEST
*
* @param datasetType <p>
* The type of the dataset. Specify <code>train</code> to create
* a training dataset. Specify <code>test</code> to create a test
* The type of the dataset. Specify <code>TRAIN</code> to create
* a training dataset. Specify <code>TEST</code> to create a test
* dataset.
* </p>
* @return A reference to this updated object so that method calls can be
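
The request class above accepts the dataset type either as a plain string or through the DatasetType enum; a short sketch of both forms, limited to the setters that appear in this file, follows.

    import com.amazonaws.services.rekognition.model.CreateDatasetRequest;
    import com.amazonaws.services.rekognition.model.DatasetType;

    public class DatasetTypeSketch {
        public static void build() {
            // String form: the value must be upper case ("TRAIN" or "TEST").
            CreateDatasetRequest byString = new CreateDatasetRequest().withDatasetType("TRAIN");

            // Enum form: avoids typos in the allowed values.
            CreateDatasetRequest byEnum = new CreateDatasetRequest();
            byEnum.setDatasetType(DatasetType.TEST);

            System.out.println(byString.getDatasetType() + " / " + byEnum.getDatasetType());
        }
    }
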
@@ -23,13 +23,19 @@
* <p>
* This API operation initiates a Face Liveness session. It returns a
* <code>SessionId</code>, which you can use to start streaming Face Liveness
* video and get the results for a Face Liveness session. You can use the
* <code>OutputConfig</code> option in the Settings parameter to provide an
* Amazon S3 bucket location. The Amazon S3 bucket stores reference images and
* audit images. You can use <code>AuditImagesLimit</code> to limit the number
* of audit images returned. This number is between 0 and 4. By default, it is
* set to 0. The limit is best effort and based on the duration of the
* selfie-video.
* video and get the results for a Face Liveness session.
* </p>
* <p>
* You can use the <code>OutputConfig</code> option in the Settings parameter to
* provide an Amazon S3 bucket location. The Amazon S3 bucket stores reference
* images and audit images. If no Amazon S3 bucket is defined, raw bytes are
* sent instead.
* </p>
* <p>
* You can use <code>AuditImagesLimit</code> to limit the number of audit images
* returned when <code>GetFaceLivenessSessionResults</code> is called. This
* number is between 0 and 4. By default, it is set to 0. The limit is best
* effort and based on the duration of the selfie-video.
* </p>
*/
public class CreateFaceLivenessSessionRequest extends AmazonWebServiceRequest implements