Achieving Zero-Static-Secret Architecture
Harbor is widely recognized as the CNCF-graduated standard for open-source container registries. It is powerful, feature-rich, and trusted by thousands of organizations. However, its default AWS integration relies on a legacy pattern that modern security teams increasingly reject: static secrets.
In strictly governed AWS environments, storing long-lived credentials in Kubernetes Secrets represents a “Secret Zero” vulnerability. In this post, I share how I modernized Harbor’s authentication layer to use AWS RDS IAM Authentication and IAM Roles for Service Accounts (IRSA), shifting security from a manual burden to an automated guarantee.
We have all seen this in our clusters: a secret containing a long-lived AWS_ACCESS_KEY_ID for S3 access, or a hardcoded master password for a database connection string.

Before (Legacy Flow): The system relies on static passwords passed via config strings both for RDS and S3 access, creating significant rotation and leakage risks.
While functional, this approach requires manual key rotation and managing complex secret lifecycles. If these secrets are compromised, your entire artifact storage backend is exposed.
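In Kubernetes terms, the legacy pattern usually looks something like this (names and values are purely illustrative):

```yaml
# Illustrative only: the kind of long-lived credential Secret this post eliminates
apiVersion: v1
kind: Secret
metadata:
  name: harbor-s3-credentials
  namespace: harbor
type: Opaque
stringData:
  REGISTRY_STORAGE_S3_ACCESSKEY: "AKIAXXXXXXXXXXXXXXXX"  # static IAM user key
  REGISTRY_STORAGE_S3_SECRETKEY: "never-rotated-secret"  # lives until someone remembers to rotate it
```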
When we investigated modernizing this flow, we identified two primary technical gaps in the upstream Harbor project:
1. The database layer expected a static password and had no mechanism to generate or refresh the short-lived tokens that RDS IAM Authentication requires.
2. The bundled S3 storage driver did not support AssumeRoleWithWebIdentity, meaning it couldn't exchange a Kubernetes ServiceAccount token for AWS temporary credentials.
We refactored Harbor to leverage ephemeral identity. By patching the core Go codebase and upgrading the internal distribution engine to v3, we enabled a completely keyless architecture.

After (The Modern Flow): Harbor components dynamically assume roles and request ephemeral tokens from AWS STS, removing the need for static credentials entirely.
Harbor’s core components connect to PostgreSQL using the pgx driver. By default, this driver expects a static password. We refactored the connection logic in src/common/dao/pgsql.go, but a significant challenge emerged during implementation: IAM tokens expire every 15 minutes.
Standard connection pools establish a connection at startup, but once that initial token expires, any new connection attempt causes the application to crash.
I solved this by implementing a beforeConnectHook in the pgx driver. This ensures the application requests a fresh cryptographic token from AWS every time a new connection is established in the pool.
// src/common/dao/pgsql.go
// Define the hook function to handle ephemeral token refreshing
beforeConnectHook := func(ctx context.Context, cfg *pgx.ConnConfig) error {
    // 1. Request a fresh, signed token from AWS RDS utilities
    token, err := getIAMToken(p.host, p.port, p.usr, region)
    if err != nil {
        log.Errorf("IAM Auth: Failed to generate token: %v", err)
        return err
    }
    // 2. Inject the temporary token as the connection password
    cfg.Password = token
    log.Debugf("IAM Auth: Token refreshed for new connection to %s", cfg.Host)
    return nil
}
// 3. Open the DB using the option pattern to attach the hook
sqlDB := stdlib.OpenDB(*config, stdlib.OptionBeforeConnect(beforeConnectHook))

Full sequence: How the Harbor pod creates a ServiceAccount, assumes the IAM role via IRSA, and refreshes RDS auth tokens on every connection cycle using the BeforeConnect hook.
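The `getIAMToken` helper is the other half of the patch. Harbor's actual implementation may differ; the following is a minimal sketch built on the AWS SDK for Go v2's RDS auth utility, where only the function name and parameters mirror the call site above and everything else is an assumption:

```go
// Hypothetical sketch of the getIAMToken helper referenced by the hook.
// The body is an assumption based on the AWS SDK for Go v2 RDS auth
// utility, not Harbor's actual patched code.
package dao

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/rds/auth"
)

func getIAMToken(host, port, user, region string) (string, error) {
	ctx := context.Background()
	// Resolve credentials through the default chain. Under IRSA this picks up
	// AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE and exchanges the projected
	// ServiceAccount token for temporary STS credentials automatically.
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion(region))
	if err != nil {
		return "", err
	}
	// BuildAuthToken signs a token, valid for 15 minutes, for this endpoint and user.
	return auth.BuildAuthToken(ctx, fmt.Sprintf("%s:%s", host, port), region, user, cfg.Credentials)
}
```

Because the hook runs on every new pool connection, no caching is needed: a token is always fresh at the moment the connection authenticates.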
For S3 access, the Registry binary relies on the upstream docker/distribution project. To enable IAM Roles for Service Accounts (IRSA), where a Pod inherits permissions from an AWS IAM Role, we upgraded the build process to use the modern distribution/distribution v3 libraries.
This upgrade allows the S3 storage driver to automatically detect the AWS_WEB_IDENTITY_TOKEN_FILE projected by Kubernetes, removing the need to define accesskey and secretkey in the Helm values.
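Concretely, IRSA works because eksctl annotates the ServiceAccount with a role ARN, and the EKS pod identity webhook then mounts a projected token and injects matching environment variables into every pod using that ServiceAccount. The resulting objects look roughly like this (ARNs are placeholders):

```yaml
# ServiceAccount as created by eksctl (the annotation is added automatically)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: harbor-sa
  namespace: harbor
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IRSA_ROLE_NAME>
---
# Environment the EKS pod identity webhook injects into pods using this SA:
#   AWS_ROLE_ARN=arn:aws:iam::<ACCOUNT_ID>:role/<IRSA_ROLE_NAME>
#   AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
```

The v3 S3 driver's credential chain reads those two variables and calls AssumeRoleWithWebIdentity on our behalf, which is why no key material appears anywhere in the Helm values.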
You can deploy this hardened version of Harbor today using our verified artifacts and custom images.
We have hosted the patched images and the modern OCI Helm chart in our public registry:
# Pull the images
docker pull 8gears.container-registry.com/8gcr/harbor-jobservice
docker pull 8gears.container-registry.com/8gcr/harbor-core
docker pull 8gears.container-registry.com/8gcr/harbor-registry
# Pull the Helm Chart
helm pull oci://8gears.container-registry.com/8gcr/harbor --version 3.0.0
Before deploying Harbor, we need to provision the cloud resources. This includes an OIDC-enabled EKS cluster, an S3 bucket for artifact storage, and a PostgreSQL instance with IAM authentication enabled.
export AWS_REGION="us-east-1"
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export CLUSTER_NAME="harbor-on-aws-natively-cluster"
export POLICY_NAME="HarborOnAwsNativePolicy"
export BUCKET_NAME="harbor-on-aws-natively-store"
export SA_NAME="harbor-sa"
export NAMESPACE="harbor"
export DB_NAME="registry"
export DB_USER="harbor_iam_user"
export DB_INSTANCE_ID="harbor-db"
export DB_CLASS="db.t3.medium"
eksctl create cluster \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --version 1.30 \
  --with-oidc \
  --managed \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 4
aws s3 mb "s3://$BUCKET_NAME" --region $AWS_REGION
aws iam create-policy \
  --policy-name $POLICY_NAME \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject",
          "s3:ListBucket",
          "s3:GetBucketLocation",
          "s3:ListBucketMultipartUploads",
          "s3:AbortMultipartUpload",
          "s3:ListMultipartUploadParts"
        ],
        "Resource": [
          "arn:aws:s3:::'"$BUCKET_NAME"'",
          "arn:aws:s3:::'"$BUCKET_NAME"'/*"
        ]
      },
      {
        "Effect": "Allow",
        "Action": ["rds-db:connect"],
        "Resource": [
          "arn:aws:rds-db:'"$AWS_REGION"':'"$AWS_ACCOUNT_ID"':dbuser:*/'"$DB_USER"'"
        ]
      }
    ]
  }'
eksctl create iamserviceaccount \
  --cluster=$CLUSTER_NAME \
  --name=$SA_NAME \
  --namespace=$NAMESPACE \
  --attach-policy-arn="arn:aws:iam::$AWS_ACCOUNT_ID:policy/$POLICY_NAME" \
  --approve
We provision a PostgreSQL instance with IAM Database Authentication enabled (--enable-iam-database-authentication).
# Get EKS Network Information
export EKS_VPC_ID=$(aws eks describe-cluster \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text)
export EKS_CIDR=$(aws ec2 describe-vpcs \
  --vpc-ids $EKS_VPC_ID \
  --region $AWS_REGION \
  --query "Vpcs[0].CidrBlock" \
  --output text)
export SUBNET_IDS=$(aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=$EKS_VPC_ID" \
  --region $AWS_REGION \
  --query "Subnets[*].SubnetId" \
  --output text)
echo "VPC ID: $EKS_VPC_ID"
echo "CIDR: $EKS_CIDR"

# Create Security Group
export DB_SG_ID=$(aws ec2 create-security-group \
  --group-name harbor-db-sg \
  --description "Security group for Harbor RDS" \
  --vpc-id $EKS_VPC_ID \
  --output text --query 'GroupId' --region $AWS_REGION)
aws ec2 authorize-security-group-ingress \
  --group-id $DB_SG_ID \
  --protocol tcp \
  --port 5432 \
  --cidr $EKS_CIDR \
  --region $AWS_REGION

# Create DB Subnet Group
aws rds create-db-subnet-group \
  --db-subnet-group-name harbor-native-subnets \
  --db-subnet-group-description "Subnets for Harbor RDS" \
  --subnet-ids $SUBNET_IDS \
  --region $AWS_REGION

# Create RDS Instance
aws rds create-db-instance \
  --db-instance-identifier $DB_INSTANCE_ID \
  --db-instance-class $DB_CLASS \
  --engine postgres \
  --engine-version 18.1 \
  --master-username harbor_admin \
  --master-user-password "<yourPassword>" \
  --allocated-storage 20 \
  --db-name $DB_NAME \
  --enable-iam-database-authentication \
  --vpc-security-group-ids $DB_SG_ID \
  --db-subnet-group-name harbor-native-subnets \
  --backup-retention-period 7 \
  --no-publicly-accessible \
  --region $AWS_REGION
echo "Waiting for RDS (5-10 minutes)..."
aws rds wait db-instance-available \
  --db-instance-identifier $DB_INSTANCE_ID \
  --region $AWS_REGION

# Configure IAM Database User
export DB_ENDPOINT=$(aws rds describe-db-instances \
  --db-instance-identifier $DB_INSTANCE_ID \
  --region $AWS_REGION \
  --query "DBInstances[0].Endpoint.Address" \
  --output text)
echo "Database Endpoint: $DB_ENDPOINT"
kubectl create namespace $NAMESPACE
# Connect to RDS (Note: the master password is only needed for this one-time setup.
# Consider using AWS Secrets Manager for the master password in production.)
kubectl run postgres-client --rm -it \
  --image=postgres:18 \
  --restart=Never \
  --namespace=$NAMESPACE \
  --env=PGPASSWORD=<yourPassword> \
  -- psql -h $DB_ENDPOINT -U harbor_admin -d $DB_NAME
Once connected, run the following SQL commands inside PostgreSQL:
CREATE USER harbor_iam_user WITH LOGIN;
GRANT rds_iam TO harbor_iam_user;
GRANT ALL PRIVILEGES ON DATABASE registry TO harbor_iam_user;
GRANT ALL ON SCHEMA public TO harbor_iam_user;
\q
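Before wiring up Harbor, you can verify the IAM login path by hand. `aws rds generate-db-auth-token` mints the same kind of 15-minute token the BeforeConnect hook requests at runtime; run this from an identity that is allowed `rds-db:connect` (for example, a pod using the harbor-sa ServiceAccount):

```shell
# Mint a short-lived IAM auth token and use it as the password
export PGPASSWORD=$(aws rds generate-db-auth-token \
  --hostname $DB_ENDPOINT \
  --port 5432 \
  --username $DB_USER \
  --region $AWS_REGION)
# sslmode=require matters: RDS rejects IAM tokens over plaintext connections
psql "host=$DB_ENDPOINT port=5432 user=$DB_USER dbname=$DB_NAME sslmode=require" -c "SELECT current_user;"
```

If the query returns harbor_iam_user, token-based login works end to end.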
In values-aws-native.yaml, we configure Harbor to use native AWS authentication. Note that HARBOR_DATABASE_IAM_AUTH is explicitly enabled, the password field is set to a dummy value (it will be ignored by our hook), and the storage credential fields are left empty: the registry inherits permissions directly from the ServiceAccount via IRSA.
# ============================================================
# HARBOR AWS NATIVE CONFIGURATION
# Features: RDS IAM Auth + S3 IRSA
# ============================================================

# 1. GLOBAL SETTINGS
externalURL: "https://harbor.test"

# 2. CONFIGURATION & IAM AUTH
core:
  replicas: 1
  image:
    repository: 8gears.container-registry.com/8gcr/harbor-core
    tag: latest
  # SERVICE ACCOUNT (Required for IRSA)
  serviceAccount:
    create: false
    name: "harbor-sa" # This SA must be annotated with your AWS Role ARN
  securityContext:
    readOnlyRootFilesystem: false
  config:
    HARBOR_DATABASE_IAM_AUTH: "true"
    POSTGRES_HOST: "<YOUR_DB_ENDPOINT>"
    POSTGRES_PORT: "5432"
    POSTGRES_USER: "harbor_iam_user"
    POSTGRES_DATABASE: "registry"

# --- JOBSERVICE ---
jobservice:
  replicas: 1
  image:
    repository: 8gears.container-registry.com/8gcr/harbor-jobservice
    tag: latest
  serviceAccount:
    create: false
    name: "harbor-sa"
  securityContext:
    readOnlyRootFilesystem: false
  config:
    HARBOR_DATABASE_IAM_AUTH: "true"

# --- REGISTRY ---
registry:
  replicas: 1
  image:
    repository: 8gears.container-registry.com/8gcr/harbor-registry
    tag: latest
  serviceAccount:
    create: false
    name: "harbor-sa"
  relativeurls: true
  persistence:
    enabled: false
  securityContext:
    readOnlyRootFilesystem: false
  env:
    - name: REGISTRY_STORAGE_CACHE_LAYERINFO
      value: "inmemory"
    - name: AWS_REGION
      value: "<YOUR_AWS_REGION>"
  storage:
    type: s3
    s3:
      region: "<YOUR_AWS_REGION>"
      bucket: "<YOUR_BUCKET_NAME>"
      secure: true
      v4auth: true
      # No static keys required! The driver uses the pod role via IRSA.
      accesskey: ""
      secretkey: ""

# 3. DATABASE (RDS IAM Auth)
database:
  host: "<YOUR_DB_ENDPOINT>"
  port: 5432
  username: "harbor_iam_user"
  password: "dummy_password" # Required by the Helm chart schema but ignored at runtime; the BeforeConnect hook replaces it with an IAM token
  database: "registry"
  sslmode: "require"
helm upgrade --install my-harbor oci://8gears.container-registry.com/8gcr/harbor \
  --version 3.0.0 \
  --namespace harbor \
  -f values-aws-native.yaml
kubectl -n harbor get pods
kubectl -n harbor logs -l app=harbor-core --tail=50
Confirm that all pods reach the Running state. In the core logs, look for IAM Auth: Token refreshed messages to verify that RDS IAM authentication is active.
Modernizing Harbor to embrace AWS native identity isn't just about refactoring code; it's about shifting security from a manual burden to an automated guarantee.
By replacing static, long-lived secrets with ephemeral, auto-rotating tokens via RDS IAM and IRSA, we empower platform engineers to meet strict enterprise compliance standards without the operational toil. This architecture sets a new benchmark for running Harbor on EKS, ensuring your registry is as secure as the infrastructure it runs on. Ultimately, it allows your team to stop managing keys and start focusing on what matters: delivering software.
8gears Container Registry is a Harbor-based container registry as a service. You can start free and go up to any scale with our flexible plans.
Published — February 19, 2026