1 change: 1 addition & 0 deletions .eslintrc.json
@@ -12,6 +12,7 @@
"node/no-unsupported-features/es-syntax": ["off"]
},
"parserOptions": {
"ecmaVersion": 2020,
"sourceType": "module"
}
}
6 changes: 6 additions & 0 deletions auth/.eslintrc.json
@@ -0,0 +1,6 @@
{
"extends": "../.eslintrc.json",
"rules": {
"no-unused-vars": "off"
}
}
2 changes: 1 addition & 1 deletion auth/README.md
@@ -64,4 +64,4 @@ information](https://developers.google.com/identity/protocols/application-defaul

For more information on downscoped credentials you can visit:

> https://github.com/googleapis/google-auth-library-nodejs
> https://github.com/googleapis/google-auth-library-nodejs
15 changes: 15 additions & 0 deletions auth/customcredentials/aws/Dockerfile
@@ -0,0 +1,15 @@
FROM node:20-slim

WORKDIR /app

COPY package*.json ./

RUN npm install --omit=dev

RUN useradd -m appuser

COPY --chown=appuser:appuser . .

USER appuser

CMD [ "node", "customCredentialSupplierAws.js" ]
121 changes: 121 additions & 0 deletions auth/customcredentials/aws/README.md
@@ -0,0 +1,121 @@
# Running the Custom AWS Credential Supplier Sample (Node.js)

This sample demonstrates how to use a custom AWS security credential supplier to authenticate with Google Cloud using AWS as an external identity provider. It uses the **AWS SDK for JavaScript (v3)** to fetch credentials from sources like Amazon Elastic Kubernetes Service (EKS) with IAM Roles for Service Accounts (IRSA), Elastic Container Service (ECS), or Fargate.

## Prerequisites

* An AWS account.
* A Google Cloud project with the IAM API enabled.
* A GCS bucket.
* **Node.js 16** or later installed.
* **npm** installed.

If you want to use AWS security credentials that cannot be retrieved using methods supported natively by the Google Auth library, a custom `AwsSecurityCredentialsSupplier` implementation may be specified. The supplier must return valid, unexpired AWS security credentials when called by the Google Cloud Auth library.
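
In practice, a supplier is any object that exposes `getAwsRegion()` and `getAwsSecurityCredentials()`. The minimal sketch below uses a hypothetical `StaticAwsSupplier` with placeholder values to show the shape `AwsClient` expects; the sample in this directory implements the same interface using the AWS SDK's default provider chain instead.

```javascript
const {AwsClient} = require('google-auth-library');

// Hypothetical supplier that returns credentials you obtained from your own
// source (a vault, a sidecar, etc.). All values shown here are placeholders.
class StaticAwsSupplier {
  constructor(region, credentials) {
    this.region = region; // e.g. 'us-east-1'
    this.credentials = credentials; // {accessKeyId, secretAccessKey, token}
  }

  // Called by the auth library to determine the region used to sign requests.
  async getAwsRegion() {
    return this.region;
  }

  // Called whenever the auth library needs AWS security credentials.
  // Must return valid, unexpired credentials.
  async getAwsSecurityCredentials() {
    return this.credentials;
  }
}

// The supplier is plugged into AwsClient via aws_security_credentials_supplier.
const authClient = new AwsClient({
  audience: 'YOUR_GCP_WORKLOAD_AUDIENCE',
  subject_token_type: 'urn:ietf:params:aws:token-type:aws4_request',
  aws_security_credentials_supplier: new StaticAwsSupplier('us-east-1', {
    accessKeyId: 'YOUR_AWS_ACCESS_KEY_ID',
    secretAccessKey: 'YOUR_AWS_SECRET_ACCESS_KEY',
    token: 'YOUR_OPTIONAL_SESSION_TOKEN',
  }),
});
```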

## Running Locally

For local development, you can provide credentials and configuration in a JSON file.

### Install Dependencies

Ensure you have Node.js installed, then install the required libraries:

```bash
npm install
```

### Configure Credentials for Local Development

1. Copy the example secrets file to a new file named `custom-credentials-aws-secrets.json` in the project root:

   ```bash
   cp custom-credentials-aws-secrets.json.example custom-credentials-aws-secrets.json
   ```

2. Open `custom-credentials-aws-secrets.json` and fill in the required values for your AWS and Google Cloud configuration. Do not check your `custom-credentials-aws-secrets.json` file into version control.


### Run the Application

Execute the script using node:

```bash
node customCredentialSupplierAws.js
```

When run locally, the application will detect the `custom-credentials-aws-secrets.json` file and use it to configure the necessary environment variables for the AWS SDK.

## Running in a Containerized Environment (EKS)

This section provides a brief overview of how to run the sample in an Amazon EKS cluster.

### EKS Cluster Setup

First, you need an EKS cluster. You can create one using `eksctl` or the AWS Management Console. For detailed instructions, refer to the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).

### Configure IAM Roles for Service Accounts (IRSA)

IRSA enables you to associate an IAM role with a Kubernetes service account. This provides a secure way for your pods to access AWS services without hardcoding long-lived credentials.

Run the following command to create the IAM role and bind it to a Kubernetes Service Account:

```bash
eksctl create iamserviceaccount \
  --name your-k8s-service-account \
  --namespace default \
  --cluster your-cluster-name \
  --region your-aws-region \
  --role-name your-role-name \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```

> **Note**: The `--attach-policy-arn` flag is used here to demonstrate attaching permissions. Update this with the specific AWS policy ARN your application requires.

For a deep dive into how this works without using `eksctl`, refer to the [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) documentation.

### Configure Google Cloud to Trust the AWS Role

To allow your AWS role to authenticate as a Google Cloud service account, you need to configure Workload Identity Federation. This process involves these key steps:

1. **Create a Workload Identity Pool and an AWS Provider:** The pool holds the configuration, and the provider is set up to trust your AWS account.

2. **Create or select a Google Cloud Service Account:** This service account will be impersonated by your AWS role.

3. **Bind the AWS Role to the Google Cloud Service Account:** Create an IAM policy binding that gives your AWS role the `Workload Identity User` (`roles/iam.workloadIdentityUser`) role on the Google Cloud service account.

For more detailed information, see the documentation on [Configuring Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds).
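
As a quick orientation, the values produced by this setup plug into the sample's environment variables in the following standard formats. This is a sketch with placeholder project, pool, provider, and service account names, not working values:

```javascript
// Maps to GCP_WORKLOAD_AUDIENCE (substitute your own identifiers):
const audience =
  '//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/' +
  'workloadIdentityPools/POOL_ID/providers/PROVIDER_ID';

// Maps to GCP_SERVICE_ACCOUNT_IMPERSONATION_URL (only needed if you use
// service account impersonation):
const impersonationUrl =
  'https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/' +
  'SA_NAME@PROJECT_ID.iam.gserviceaccount.com:generateAccessToken';
```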

### Containerize and Package the Application

Create a `Dockerfile` for the Node.js application and push the image to a container registry (for example, Amazon ECR) that your EKS cluster can access.

**Note:** The provided [`Dockerfile`](Dockerfile) is an example and may need modification for your specific needs.

Build and push the image:
```bash
docker build -t your-container-image:latest .
docker push your-container-image:latest
```

### Deploy to EKS

Create a Kubernetes deployment manifest to deploy your application to the EKS cluster. See the [`pod.yaml`](pod.yaml) file for an example.

**Note:** The provided [`pod.yaml`](pod.yaml) is an example and may need to be modified for your specific needs.

Deploy the pod:

```bash
kubectl apply -f pod.yaml
```

### Clean Up

To clean up the resources, delete the EKS cluster and any other AWS and Google Cloud resources you created.

```bash
eksctl delete cluster --name your-cluster-name
```

## Testing

This sample is not continuously tested. It is provided for instructional purposes and may require modifications to work in your environment.
8 changes: 8 additions & 0 deletions auth/customcredentials/aws/custom-credentials-aws-secrets.json.example
@@ -0,0 +1,8 @@
{
"aws_access_key_id": "YOUR_AWS_ACCESS_KEY_ID",
"aws_secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY",
"aws_region": "YOUR_AWS_REGION",
"gcp_workload_audience": "YOUR_GCP_WORKLOAD_AUDIENCE",
"gcs_bucket_name": "YOUR_GCS_BUCKET_NAME",
"gcp_service_account_impersonation_url": "YOUR_GCP_SERVICE_ACCOUNT_IMPERSONATION_URL"
}
184 changes: 184 additions & 0 deletions auth/customcredentials/aws/customCredentialSupplierAws.js
@@ -0,0 +1,184 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// [START auth_custom_credential_supplier_aws]
const {AwsClient} = require('google-auth-library');
const {fromNodeProviderChain} = require('@aws-sdk/credential-providers');
const fs = require('fs');
const path = require('path');
const {STSClient} = require('@aws-sdk/client-sts');
const {Storage} = require('@google-cloud/storage');

/**
* Custom AWS Security Credentials Supplier.
*
* This implementation resolves AWS credentials using the default Node provider
* chain from the AWS SDK. This allows fetching credentials from environment
* variables, shared credential files (~/.aws/credentials), or IAM roles
* for service accounts (IRSA) in EKS, etc.
*/
class CustomAwsSupplier {
  constructor() {
    this.region = null;

    this.awsCredentialsProvider = fromNodeProviderChain();
  }

  /**
   * Returns the AWS region. This is required for signing the AWS request.
   * It resolves the region automatically by using the default AWS region
   * provider chain, which searches for the region in the standard locations
   * (environment variables, AWS config file, etc.).
   */
  async getAwsRegion(_context) {
    if (this.region) {
      return this.region;
    }

    const client = new STSClient({});
    this.region = await client.config.region();

    if (!this.region) {
      throw new Error(
        'CustomAwsSupplier: Unable to resolve AWS region. Please set the AWS_REGION environment variable or configure it in your ~/.aws/config file.'
      );
    }

    return this.region;
  }

  /**
   * Retrieves AWS security credentials using the AWS SDK's default provider chain.
   */
  async getAwsSecurityCredentials(_context) {
    const awsCredentials = await this.awsCredentialsProvider();

    if (!awsCredentials.accessKeyId || !awsCredentials.secretAccessKey) {
      throw new Error(
        'Unable to resolve AWS credentials from the node provider chain. ' +
          'Ensure your AWS CLI is configured, or AWS environment variables (like AWS_ACCESS_KEY_ID) are set.'
      );
    }

    return {
      accessKeyId: awsCredentials.accessKeyId,
      secretAccessKey: awsCredentials.secretAccessKey,
      token: awsCredentials.sessionToken,
    };
  }
}

/**
* Authenticates with Google Cloud using AWS credentials and retrieves bucket metadata.
*
* @param {string} bucketName The name of the bucket to retrieve.
* @param {string} audience The Workload Identity Pool audience.
* @param {string} [impersonationUrl] Optional Service Account impersonation URL.
*/
async function authenticateWithAwsCredentials(
  bucketName,
  audience,
  impersonationUrl
) {
  const customSupplier = new CustomAwsSupplier();

  const clientOptions = {
    audience: audience,
    subject_token_type: 'urn:ietf:params:aws:token-type:aws4_request',
    service_account_impersonation_url: impersonationUrl,
    aws_security_credentials_supplier: customSupplier,
  };

  const authClient = new AwsClient(clientOptions);

  const storage = new Storage({
    authClient: authClient,
  });

  const [metadata] = await storage.bucket(bucketName).getMetadata();
  return metadata;
}
// [END auth_custom_credential_supplier_aws]

/**
* If a local secrets file is present, load it into the process environment.
* This is a "just-in-time" configuration for local development. These
* variables are only set for the current process.
*/
function loadConfigFromFile() {
  const secretsPath = path.resolve(
    __dirname,
    'custom-credentials-aws-secrets.json'
  );
  if (!fs.existsSync(secretsPath)) return;

  try {
    const secrets = JSON.parse(fs.readFileSync(secretsPath, 'utf8'));

    const envMap = {
      aws_access_key_id: 'AWS_ACCESS_KEY_ID',
      aws_secret_access_key: 'AWS_SECRET_ACCESS_KEY',
      aws_region: 'AWS_REGION',
      gcp_workload_audience: 'GCP_WORKLOAD_AUDIENCE',
      gcs_bucket_name: 'GCS_BUCKET_NAME',
      gcp_service_account_impersonation_url:
        'GCP_SERVICE_ACCOUNT_IMPERSONATION_URL',
    };

    for (const [jsonKey, envKey] of Object.entries(envMap)) {
      if (secrets[jsonKey]) {
        process.env[envKey] = secrets[jsonKey];
      }
    }
  } catch (error) {
    console.error(`Error reading secrets file: ${error.message}`);
  }
}

async function main() {
  loadConfigFromFile();

  const gcpAudience = process.env.GCP_WORKLOAD_AUDIENCE;
  const saImpersonationUrl = process.env.GCP_SERVICE_ACCOUNT_IMPERSONATION_URL;
  const gcsBucketName = process.env.GCS_BUCKET_NAME;

  if (!gcpAudience || !gcsBucketName) {
    throw new Error(
      'Missing required configuration. Please provide it in a ' +
        'custom-credentials-aws-secrets.json file or as environment variables: ' +
        'GCP_WORKLOAD_AUDIENCE, GCS_BUCKET_NAME'
    );
  }

  try {
    console.log(`Retrieving metadata for bucket: ${gcsBucketName}...`);
    const bucketMetadata = await authenticateWithAwsCredentials(
      gcsBucketName,
      gcpAudience,
      saImpersonationUrl
    );
    console.log('\n--- SUCCESS! ---');
    console.log('Bucket Metadata:', JSON.stringify(bucketMetadata, null, 2));
  } catch (error) {
    console.error('\n--- FAILED ---');
    console.error(error.message || error);
    process.exitCode = 1;
  }
}

if (require.main === module) {
  main();
}

exports.authenticateWithAwsCredentials = authenticateWithAwsCredentials;
44 changes: 44 additions & 0 deletions auth/customcredentials/aws/pod.yaml
@@ -0,0 +1,44 @@
# Copyright 2025 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Pod
metadata:
  name: custom-credential-pod-node
spec:
  # The Kubernetes Service Account that is annotated with the corresponding
  # AWS IAM role ARN. See the README for instructions on setting up IAM
  # Roles for Service Accounts (IRSA).
  serviceAccountName: your-k8s-service-account
  containers:
    - name: gcp-auth-sample-node
      # The container image pushed to the container registry
      # For example, Amazon Elastic Container Registry
      image: your-container-image:latest
      env:
        # REQUIRED: The AWS region. The AWS SDK for Node.js requires this
        # to be set explicitly in containers.
        - name: AWS_REGION
          value: "your-aws-region"

        # REQUIRED: The full identifier of the Workload Identity Pool provider
        - name: GCP_WORKLOAD_AUDIENCE
          value: "your-gcp-workload-audience"

        # OPTIONAL: Enable Google Cloud service account impersonation
        # - name: GCP_SERVICE_ACCOUNT_IMPERSONATION_URL
        #   value: "your-gcp-service-account-impersonation-url"

        # REQUIRED: The bucket to list
        - name: GCS_BUCKET_NAME
          value: "your-gcs-bucket-name"