rekognition Update models to latest #3279

Merged
@@ -1539,8 +1539,8 @@ DetectCustomLabelsResult detectCustomLabels(DetectCustomLabelsRequest detectCust
* each face detected, the operation returns face details. These details
* include a bounding box of the face, a confidence value (that the bounding
* box contains a face), and a fixed set of attributes such as facial
- * landmarks (for example, coordinates of eye and mouth), presence of beard,
- * sunglasses, and so on.
+ * landmarks (for example, coordinates of eye and mouth), pose, presence of
+ * facial occlusion, and so on.
* </p>
* <p>
* The face-detection algorithm is most effective on frontal faces. For
@@ -2889,13 +2889,16 @@ GetTextDetectionResult getTextDetection(GetTextDetectionRequest getTextDetection
* </li>
* </ul>
* <p>
- * If you request all facial attributes (by using the
- * <code>detectionAttributes</code> parameter), Amazon Rekognition returns
- * detailed facial attributes, such as facial landmarks (for example,
- * location of eye and mouth) and other facial attributes. If you provide
- * the same image, specify the same collection, and use the same external ID
- * in the <code>IndexFaces</code> operation, Amazon Rekognition doesn't save
- * duplicate face metadata.
+ * If you request <code>ALL</code> or specific facial attributes (e.g.,
+ * <code>FACE_OCCLUDED</code>) by using the detectionAttributes parameter,
+ * Amazon Rekognition returns detailed facial attributes, such as facial
+ * landmarks (for example, location of eye and mouth), facial occlusion, and
+ * other facial attributes.
+ * </p>
+ * <p>
+ * If you provide the same image, specify the same collection, and use the
+ * same external ID in the <code>IndexFaces</code> operation, Amazon
+ * Rekognition doesn't save duplicate face metadata.
* </p>
* <p/>
* <p>
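
As a usage sketch for the updated IndexFaces documentation above: the snippet below indexes an S3-hosted image and requests a specific attribute set instead of ALL. The collection ID, bucket, object key, and external image ID are placeholders, and a generated varargs withDetectionAttributes(String...) wither is assumed to exist on IndexFacesRequest.

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.IndexFacesRequest;
import com.amazonaws.services.rekognition.model.IndexFacesResult;
import com.amazonaws.services.rekognition.model.S3Object;

public class IndexFacesOcclusionExample {

    /**
     * Indexes the faces in an S3-hosted image and asks Rekognition to return
     * the DEFAULT attribute set plus the new FACE_OCCLUDED attribute, rather
     * than requesting ALL attributes.
     */
    public static IndexFacesResult indexWithOcclusion(AmazonRekognition rekognition) {
        IndexFacesRequest request = new IndexFacesRequest()
                .withCollectionId("my-face-collection")            // placeholder collection ID
                .withExternalImageId("person-001")                 // placeholder external ID
                .withImage(new Image().withS3Object(new S3Object()
                        .withBucket("my-input-bucket")             // placeholder bucket
                        .withName("photos/person-001.jpg")))       // placeholder object key
                .withDetectionAttributes("DEFAULT", "FACE_OCCLUDED");

        // Re-running this call with the same image, collection, and external ID
        // does not create duplicate face metadata (see the javadoc above).
        return rekognition.indexFaces(request);
    }
}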
@@ -2378,8 +2378,8 @@ public DetectCustomLabelsResult detectCustomLabels(
* each face detected, the operation returns face details. These details
* include a bounding box of the face, a confidence value (that the bounding
* box contains a face), and a fixed set of attributes such as facial
- * landmarks (for example, coordinates of eye and mouth), presence of beard,
- * sunglasses, and so on.
+ * landmarks (for example, coordinates of eye and mouth), pose, presence of
+ * facial occlusion, and so on.
* </p>
* <p>
* The face-detection algorithm is most effective on frontal faces. For
@@ -4155,13 +4155,16 @@ public GetTextDetectionResult getTextDetection(GetTextDetectionRequest getTextDe
* </li>
* </ul>
* <p>
- * If you request all facial attributes (by using the
- * <code>detectionAttributes</code> parameter), Amazon Rekognition returns
- * detailed facial attributes, such as facial landmarks (for example,
- * location of eye and mouth) and other facial attributes. If you provide
- * the same image, specify the same collection, and use the same external ID
- * in the <code>IndexFaces</code> operation, Amazon Rekognition doesn't save
- * duplicate face metadata.
+ * If you request <code>ALL</code> or specific facial attributes (e.g.,
+ * <code>FACE_OCCLUDED</code>) by using the detectionAttributes parameter,
+ * Amazon Rekognition returns detailed facial attributes, such as facial
+ * landmarks (for example, location of eye and mouth), facial occlusion, and
+ * other facial attributes.
+ * </p>
+ * <p>
+ * If you provide the same image, specify the same collection, and use the
+ * same external ID in the <code>IndexFaces</code> operation, Amazon
+ * Rekognition doesn't save duplicate face metadata.
* </p>
* <p/>
* <p>
@@ -24,7 +24,18 @@
public enum Attribute {

DEFAULT("DEFAULT"),
ALL("ALL");
ALL("ALL"),
AGE_RANGE("AGE_RANGE"),
BEARD("BEARD"),
EMOTIONS("EMOTIONS"),
EYEGLASSES("EYEGLASSES"),
EYES_OPEN("EYES_OPEN"),
GENDER("GENDER"),
MOUTH_OPEN("MOUTH_OPEN"),
MUSTACHE("MUSTACHE"),
FACE_OCCLUDED("FACE_OCCLUDED"),
SMILE("SMILE"),
SUNGLASSES("SUNGLASSES");

private String value;

@@ -42,6 +53,17 @@ public String toString() {
enumMap = new HashMap<String, Attribute>();
enumMap.put("DEFAULT", DEFAULT);
enumMap.put("ALL", ALL);
enumMap.put("AGE_RANGE", AGE_RANGE);
enumMap.put("BEARD", BEARD);
enumMap.put("EMOTIONS", EMOTIONS);
enumMap.put("EYEGLASSES", EYEGLASSES);
enumMap.put("EYES_OPEN", EYES_OPEN);
enumMap.put("GENDER", GENDER);
enumMap.put("MOUTH_OPEN", MOUTH_OPEN);
enumMap.put("MUSTACHE", MUSTACHE);
enumMap.put("FACE_OCCLUDED", FACE_OCCLUDED);
enumMap.put("SMILE", SMILE);
enumMap.put("SUNGLASSES", SUNGLASSES);
}

/**
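
A minimal sketch of how the new Attribute values might be used with DetectFaces, assuming the generated DetectFacesRequest exposes a varargs withAttributes(String...) wither; the bucket and object key are placeholders, and the enum constants are passed via toString(), which the enum maps to its wire value. Requesting only the attributes you need rather than ALL keeps the response limited to the fields of interest.

import java.util.List;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.model.Attribute;
import com.amazonaws.services.rekognition.model.DetectFacesRequest;
import com.amazonaws.services.rekognition.model.DetectFacesResult;
import com.amazonaws.services.rekognition.model.FaceDetail;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.S3Object;

public class DetectFacesAttributeExample {

    /** Detects faces and requests two specific attributes via the new enum values. */
    public static void detectWithSpecificAttributes(AmazonRekognition rekognition) {
        DetectFacesRequest request = new DetectFacesRequest()
                .withImage(new Image().withS3Object(new S3Object()
                        .withBucket("my-input-bucket")           // placeholder bucket
                        .withName("photos/group.jpg")))          // placeholder object key
                // Pass the new enum constants as their wire strings instead of ALL.
                .withAttributes(Attribute.FACE_OCCLUDED.toString(),
                        Attribute.EYES_OPEN.toString());

        DetectFacesResult result = rekognition.detectFaces(request);
        List<FaceDetail> faces = result.getFaceDetails();
        for (FaceDetail face : faces) {
            System.out.println("Face at " + face.getBoundingBox()
                    + " (confidence " + face.getConfidence() + ")");
        }
    }
}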
@@ -30,7 +30,8 @@ public class CreateFaceLivenessSessionRequestSettings implements Serializable {
* audit images will be stored. Note that the Amazon S3 bucket must be
* located in the caller's AWS account and in the same region as the Face
* Liveness end-point. Additionally, the Amazon S3 object keys are
- * auto-generated by the Face Liveness system.
+ * auto-generated by the Face Liveness system. Requires that the caller has
+ * the <code>s3:PutObject</code> permission on the Amazon S3 bucket.
* </p>
*/
private LivenessOutputConfig outputConfig;
@@ -54,7 +55,8 @@ public class CreateFaceLivenessSessionRequestSettings implements Serializable {
* audit images will be stored. Note that the Amazon S3 bucket must be
* located in the caller's AWS account and in the same region as the Face
* Liveness end-point. Additionally, the Amazon S3 object keys are
- * auto-generated by the Face Liveness system.
+ * auto-generated by the Face Liveness system. Requires that the caller has
+ * the <code>s3:PutObject</code> permission on the Amazon S3 bucket.
* </p>
*
* @return <p>
@@ -63,6 +65,8 @@ public class CreateFaceLivenessSessionRequestSettings implements Serializable {
* must be located in the caller's AWS account and in the same
* region as the Face Liveness end-point. Additionally, the Amazon
* S3 object keys are auto-generated by the Face Liveness system.
+ * Requires that the caller has the <code>s3:PutObject</code>
+ * permission on the Amazon S3 bucket.
* </p>
*/
public LivenessOutputConfig getOutputConfig() {
@@ -75,7 +79,8 @@ public LivenessOutputConfig getOutputConfig() {
* audit images will be stored. Note that the Amazon S3 bucket must be
* located in the caller's AWS account and in the same region as the Face
* Liveness end-point. Additionally, the Amazon S3 object keys are
- * auto-generated by the Face Liveness system.
+ * auto-generated by the Face Liveness system. Requires that the caller has
+ * the <code>s3:PutObject</code> permission on the Amazon S3 bucket.
* </p>
*
* @param outputConfig <p>
@@ -84,7 +89,8 @@ public LivenessOutputConfig getOutputConfig() {
* Amazon S3 bucket must be located in the caller's AWS account
* and in the same region as the Face Liveness end-point.
* Additionally, the Amazon S3 object keys are auto-generated by
- * the Face Liveness system.
+ * the Face Liveness system. Requires that the caller has the
+ * <code>s3:PutObject</code> permission on the Amazon S3 bucket.
* </p>
*/
public void setOutputConfig(LivenessOutputConfig outputConfig) {
@@ -97,7 +103,8 @@ public void setOutputConfig(LivenessOutputConfig outputConfig) {
* audit images will be stored. Note that the Amazon S3 bucket must be
* located in the caller's AWS account and in the same region as the Face
* Liveness end-point. Additionally, the Amazon S3 object keys are
- * auto-generated by the Face Liveness system.
+ * auto-generated by the Face Liveness system. Requires that the caller has
+ * the <code>s3:PutObject</code> permission on the Amazon S3 bucket.
* </p>
* <p>
* Returns a reference to this object so that method calls can be chained
@@ -109,7 +116,8 @@ public void setOutputConfig(LivenessOutputConfig outputConfig) {
* Amazon S3 bucket must be located in the caller's AWS account
* and in the same region as the Face Liveness end-point.
* Additionally, the Amazon S3 object keys are auto-generated by
- * the Face Liveness system.
+ * the Face Liveness system. Requires that the caller has the
+ * <code>s3:PutObject</code> permission on the Amazon S3 bucket.
* </p>
* @return A reference to this updated object so that method calls can be
* chained together.
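
A minimal sketch of creating a Face Liveness session with the audit-image output configuration documented above. It assumes the standard generated withers (withS3Bucket, withS3KeyPrefix, withOutputConfig, withSettings) are available; the bucket name and key prefix are placeholders, and the calling identity is assumed to hold s3:PutObject on that bucket, per the updated javadoc.

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.model.CreateFaceLivenessSessionRequest;
import com.amazonaws.services.rekognition.model.CreateFaceLivenessSessionRequestSettings;
import com.amazonaws.services.rekognition.model.CreateFaceLivenessSessionResult;
import com.amazonaws.services.rekognition.model.LivenessOutputConfig;

public class FaceLivenessSessionExample {

    /**
     * Creates a Face Liveness session that writes audit images to S3. The
     * bucket must be in the caller's account and in the same region as the
     * Face Liveness endpoint, and the caller needs s3:PutObject on it.
     */
    public static String createSession(AmazonRekognition rekognition) {
        LivenessOutputConfig outputConfig = new LivenessOutputConfig()
                .withS3Bucket("my-liveness-audit-bucket")   // placeholder bucket
                .withS3KeyPrefix("liveness/audit/");        // placeholder key prefix

        CreateFaceLivenessSessionRequestSettings settings =
                new CreateFaceLivenessSessionRequestSettings()
                        .withOutputConfig(outputConfig);

        CreateFaceLivenessSessionRequest request = new CreateFaceLivenessSessionRequest()
                .withSettings(settings);

        CreateFaceLivenessSessionResult result =
                rekognition.createFaceLivenessSession(request);
        return result.getSessionId();
    }
}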