
Cloud Service Integration — IRSA, Workload Identity & More

Where This Fits

You are the central platform team. Your tenant teams deploy microservices on EKS/GKE that need to call cloud APIs — read from S3/GCS, query RDS/Cloud SQL, fetch secrets. The old approach (mounting static credentials as Kubernetes Secrets) is a security disaster. This page covers the modern way: identity federation from pods to cloud IAM.


1. The Problem: Pods Need Cloud Credentials


Static Credentials — What Goes Wrong


2. AWS IRSA — IAM Roles for Service Accounts


IRSA Flow — How Pods Access AWS Services

IRSA lets a Kubernetes ServiceAccount assume an IAM Role without static credentials. It uses OIDC federation — the EKS cluster is an OIDC identity provider, and STS trusts tokens it issues.

IRSA Authentication Flow — Detailed IRSA Token Flow

// Projected SA token (decoded) — this is what STS validates
{
  "aud": ["sts.amazonaws.com"],
  "exp": 1740000000,
  "iat": 1739913600,
  "iss": "https://oidc.eks.us-east-1.amazonaws.com/id/ABC123DEF456",
  "kubernetes.io": {
    "namespace": "payments",
    "pod": {
      "name": "payment-processor-7d8b9c-x4k2m",
      "uid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
    },
    "serviceaccount": {
      "name": "payment-sa",
      "uid": "f1e2d3c4-b5a6-7890-fedc-ba0987654321"
    }
  },
  "sub": "system:serviceaccount:payments:payment-sa"
}

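When debugging IRSA, it helps to decode a projected token yourself and confirm the `sub` and `aud` claims match the trust policy. A minimal sketch (the claim values mirror the example above; the helper function is ours, and STS additionally verifies the token's signature against the cluster's OIDC JWKS, which this code does not):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample header.payload.signature token from the claims above
claims = {
    "aud": ["sts.amazonaws.com"],
    "iss": "https://oidc.eks.us-east-1.amazonaws.com/id/ABC123DEF456",
    "sub": "system:serviceaccount:payments:payment-sa",
}
fake = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).decode().rstrip("=")
    for part in ({"alg": "RS256"}, claims, "sig")
)
print(decode_jwt_claims(fake)["sub"])  # system:serviceaccount:payments:payment-sa
```

In a real pod, the token to inspect lives at /var/run/secrets/eks.amazonaws.com/serviceaccount/token.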
# Step 1: OIDC provider (created once per cluster)
# If using terraform-aws-modules/eks, this is automatic
data "tls_certificate" "eks" {
  url = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

# Step 2: IAM Role with OIDC trust policy
resource "aws_iam_role" "payment_processor" {
  name = "${var.cluster_name}-payment-processor"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Federated = aws_iam_openid_connect_provider.eks.arn
        }
        Action = "sts:AssumeRoleWithWebIdentity"
        Condition = {
          StringEquals = {
            # Lock to specific service account in specific namespace
            "${replace(aws_eks_cluster.main.identity[0].oidc[0].issuer, "https://", "")}:sub" = "system:serviceaccount:payments:payment-sa"
            "${replace(aws_eks_cluster.main.identity[0].oidc[0].issuer, "https://", "")}:aud" = "sts.amazonaws.com"
          }
        }
      }
    ]
  })
}

# Step 3: Attach permissions
resource "aws_iam_role_policy" "payment_s3_access" {
  name = "payment-s3-access"
  role = aws_iam_role.payment_processor.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject"
        ]
        Resource = "arn:aws:s3:::bank-payment-documents/*"
      },
      {
        Effect = "Allow"
        Action = [
          "sqs:SendMessage",
          "sqs:ReceiveMessage",
          "sqs:DeleteMessage"
        ]
        Resource = "arn:aws:sqs:us-east-1:111111111111:payment-events"
      }
    ]
  })
}

# Step 4: Output the role ARN for Kubernetes SA annotation
output "payment_processor_role_arn" {
  value = aws_iam_role.payment_processor.arn
}

# Step 5: Kubernetes ServiceAccount (annotated with role ARN)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-sa
  namespace: payments
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/prod-eks-payment-processor
---
# Step 6: Pod using the ServiceAccount
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: payments
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment-processor
  template:
    metadata:
      labels:
        app: payment-processor
    spec:
      serviceAccountName: payment-sa  # Uses the IRSA-annotated SA
      containers:
        - name: payment-processor
          image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/payment-processor:v2.1
          # AWS SDK automatically picks up:
          # - AWS_ROLE_ARN (injected by webhook)
          # - AWS_WEB_IDENTITY_TOKEN_FILE (injected by webhook)
          # No env vars or secrets needed!

# The community module makes IRSA setup much simpler
module "payment_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "5.39.0"

  role_name = "payment-processor"

  role_policy_arns = {
    s3  = aws_iam_policy.payment_s3.arn
    sqs = aws_iam_policy.payment_sqs.arn
  }

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["payments:payment-sa"]
    }
  }
}

3. EKS Pod Identity — The Simpler Alternative


EKS Pod Identity (GA since late 2023) simplifies IRSA by removing the need for OIDC provider management. It uses the EKS Pod Identity Agent instead.

IRSA vs Pod Identity:

  • IRSA (2019): Pod -> projected JWT -> STS AssumeRoleWithWebIdentity -> temp creds. Setup: OIDC provider + trust policy with OIDC conditions. Works everywhere (even outside AWS with OIDC).
  • Pod Identity (2023): Pod -> Pod Identity Agent (DaemonSet) -> EKS API -> temp creds. Setup: Pod Identity association (1 API call). Works EKS only (simpler but less portable).
# Terraform — much simpler than IRSA
resource "aws_iam_role" "payment_processor" {
  name = "${var.cluster_name}-payment-processor"

  # Trust policy is simpler — trusts the EKS Pod Identity service
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "pods.eks.amazonaws.com"
        }
        Action = [
          "sts:AssumeRole",
          "sts:TagSession"
        ]
      }
    ]
  })
}

# Single API call to associate SA with role
resource "aws_eks_pod_identity_association" "payment_processor" {
  cluster_name    = aws_eks_cluster.main.name
  namespace       = "payments"
  service_account = "payment-sa"
  role_arn        = aws_iam_role.payment_processor.arn
}

# Install the Pod Identity Agent add-on
resource "aws_eks_addon" "pod_identity_agent" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "eks-pod-identity-agent"
}

# Kubernetes side — no annotation needed on the SA!
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-sa
  namespace: payments
  # No eks.amazonaws.com/role-arn annotation needed
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: payments
spec:
  template:
    spec:
      serviceAccountName: payment-sa
      containers:
        - name: payment-processor
          image: 111111111111.dkr.ecr.us-east-1.amazonaws.com/payment-processor:v2.1
          # Pod Identity Agent handles credential injection

| Aspect | IRSA | Pod Identity |
| --- | --- | --- |
| Setup complexity | High (OIDC provider, trust policy conditions) | Low (1 association API call) |
| Cross-account | Trust policy per account/cluster | Same trust policy works across clusters |
| Portability | Works with any OIDC-compatible system | EKS only |
| SA annotation | Required | Not required |
| Max associations | Unlimited (IAM role limit) | 1 role per SA (can have multiple SAs) |
| Role chaining | Supported | Supported |
| Recommendation | Existing clusters, multi-cloud | New EKS deployments |

GCP Workload Identity — How Pods Access GCP Services

GCP Workload Identity maps a Kubernetes ServiceAccount (KSA) to a Google Service Account (GSA). Pods using the KSA can call Google APIs as the GSA — no key files needed.

Workload Identity Flow — Architecture

Full Workload Identity Setup — Terraform + YAML

# Step 1: Create Google Service Account (GSA)
resource "google_service_account" "payment_processor" {
  project      = var.workload_project_id
  account_id   = "payment-processor"
  display_name = "Payment Processor (GKE Workload Identity)"
}

# Step 2: Grant GSA the needed permissions
resource "google_project_iam_member" "payment_gcs" {
  project = var.workload_project_id
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:${google_service_account.payment_processor.email}"
}

resource "google_project_iam_member" "payment_pubsub" {
  project = var.workload_project_id
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:${google_service_account.payment_processor.email}"
}

# Step 3: Bind KSA to GSA (allow KSA to impersonate GSA)
resource "google_service_account_iam_member" "payment_wi_binding" {
  service_account_id = google_service_account.payment_processor.name
  role               = "roles/iam.workloadIdentityUser"
  # member format: serviceAccount:PROJECT.svc.id.goog[NAMESPACE/KSA]
  member = "serviceAccount:${var.workload_project_id}.svc.id.goog[payments/payment-sa]"
}

# Step 4: Ensure Workload Identity is enabled on the cluster
resource "google_container_cluster" "main" {
  # ...
  workload_identity_config {
    workload_pool = "${var.workload_project_id}.svc.id.goog"
  }
}

resource "google_container_node_pool" "main" {
  # ...
  node_config {
    workload_metadata_config {
      mode = "GKE_METADATA" # Enable GKE metadata server on nodes
    }
  }
}

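The bracketed member string in Step 3 is the piece teams most often get wrong. A small illustrative helper (the function name is ours, not part of any Google SDK) that builds it:

```python
def wi_member(project_id: str, namespace: str, ksa: str) -> str:
    """Principal that the roles/iam.workloadIdentityUser binding
    grants on the GSA — note the [namespace/ksa] bracket syntax."""
    return f"serviceAccount:{project_id}.svc.id.goog[{namespace}/{ksa}]"

print(wi_member("bank-prod-workload", "payments", "payment-sa"))
# serviceAccount:bank-prod-workload.svc.id.goog[payments/payment-sa]
```

If the project ID, namespace, or KSA name in this string does not exactly match the pod's identity, token exchange fails with an IAM denial.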
# Step 5: Kubernetes ServiceAccount (annotated with GSA email)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-sa
  namespace: payments
  annotations:
    iam.gke.io/gcp-service-account: payment-processor@bank-prod-workload.iam.gserviceaccount.com
---
# Step 6: Pod using the ServiceAccount
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: payments
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment-processor
  template:
    metadata:
      labels:
        app: payment-processor
    spec:
      serviceAccountName: payment-sa
      nodeSelector:
        iam.gke.io/gke-metadata-server-enabled: "true"
      containers:
        - name: payment-processor
          image: us-central1-docker.pkg.dev/bank-prod/images/payment-processor:v2.1
          # Google Cloud SDK automatically gets tokens from metadata server

# Option 1: SDK-based access (recommended)
# Just use IRSA/WI — the SDK handles auth automatically
# Python example in container:
# import boto3
# s3 = boto3.client('s3') # IRSA injects creds via env vars
# s3.get_object(Bucket='bank-docs', Key='file.pdf')
# Option 2: Mountpoint for S3 / GCS FUSE (mount as filesystem)
# Use case: ML training data, legacy apps that read from disk
apiVersion: v1
kind: Pod
metadata:
  name: ml-trainer
  namespace: data-science
spec:
  serviceAccountName: ml-trainer-sa  # IRSA/WI for auth
  containers:
    - name: trainer
      image: bank-ml-trainer:v1
      volumeMounts:
        - name: training-data
          mountPath: /data
  volumes:
    - name: training-data
      csi:
        driver: s3.csi.aws.com  # Mountpoint for Amazon S3 CSI driver
        volumeAttributes:
          bucketName: bank-ml-training-data

Pod to RDS via Private Subnet + Security Group

RDS IAM Authentication eliminates passwords entirely. The IRSA role needs rds-db:connect permission, and RDS must have IAM auth enabled. The app connects using a short-lived IAM auth token instead of a static password.

# RDS IAM Authentication (no passwords!)
# The IRSA role needs rds-db:connect, and IAM auth must be enabled on RDS.
#
# IAM policy for RDS IAM auth:
# {
#   "Effect": "Allow",
#   "Action": "rds-db:connect",
#   "Resource": "arn:aws:rds-db:us-east-1:111111111111:dbuser:cluster-ABC123/payment_app"
# }

# App connects using a short-lived IAM auth token instead of a password:
import boto3
import psycopg2

rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname="prod-db.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="payment_app",
)
conn = psycopg2.connect(
    host="prod-db.cluster-abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    user="payment_app",
    password=token,  # valid for 15 minutes; generate a fresh one per connection
    dbname="payments",
    sslmode="verify-full",  # TLS is mandatory for IAM auth; needs the RDS CA bundle installed
)

Cloud SQL Auth Proxy Architecture

Cloud SQL Auth Proxy handles TLS encryption, IAM authentication, and connection pooling. Always use the sidecar pattern (not a separate deployment) to keep the proxy lifecycle tied to the app pod. Use --auto-iam-authn to avoid passwords entirely.

# Cloud SQL Auth Proxy as sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: payments
spec:
  template:
    spec:
      serviceAccountName: payment-sa  # WI-enabled SA
      containers:
        # Main application
        - name: payment-processor
          image: us-central1-docker.pkg.dev/bank-prod/images/payment-processor:v2.1
          env:
            - name: DB_HOST
              value: "localhost"  # Connects to sidecar
            - name: DB_PORT
              value: "5432"
            - name: DB_NAME
              value: "payments"
            - name: DB_IAM_USER
              value: "payment-processor@bank-prod-workload.iam"  # IAM auth
        # Cloud SQL Auth Proxy sidecar
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
          args:
            - "--structured-logs"
            - "--auto-iam-authn"  # IAM authentication
            - "--private-ip"      # Use private IP
            - "bank-prod-workload:me-central1:payments-db"  # connection name
          securityContext:
            runAsNonRoot: true
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"

Direct SDK access from pods works but creates tight coupling. External Secrets Operator is the better pattern — see the next section.

# Direct SDK access (simple but not recommended at scale)
# Pod with IRSA/WI fetches secrets on startup:
#
# AWS:
# client = boto3.client('secretsmanager')
# secret = client.get_secret_value(SecretId='prod/payments/db-password')
#
# GCP:
# from google.cloud import secretmanager
# client = secretmanager.SecretManagerServiceClient()
# response = client.access_secret_version(name='projects/123/secrets/db-password/versions/latest')

ESO synchronizes secrets from external stores (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault) into Kubernetes Secrets. Pods consume standard K8s Secrets — they do not know or care where the secret came from.

External Secrets Operator Flow

# Helm install ESO
resource "helm_release" "external_secrets" {
  name             = "external-secrets"
  repository       = "https://charts.external-secrets.io"
  chart            = "external-secrets"
  namespace        = "external-secrets"
  version          = "0.10.4"
  create_namespace = true

  set {
    name  = "installCRDs"
    value = "true"
  }
}

# ClusterSecretStore — cluster-wide, managed by platform team
# Uses IRSA for authentication
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
            namespace: external-secrets
---
# ServiceAccount for ESO (IRSA-enabled)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-secrets-sa
  namespace: external-secrets
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/external-secrets-operator
---
# ExternalSecret — created by app team in their namespace
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payment-db-credentials
  namespace: payments
spec:
  refreshInterval: 1h  # Sync every hour
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: payment-db-secret  # K8s Secret name to create
    creationPolicy: Owner    # ESO owns the secret lifecycle
  data:
    - secretKey: username      # Key in the K8s Secret
      remoteRef:
        key: prod/payments/db  # AWS Secrets Manager secret name
        property: username     # JSON key within the secret
    - secretKey: password
      remoteRef:
        key: prod/payments/db
        property: password
    - secretKey: host
      remoteRef:
        key: prod/payments/db
        property: host
---
# Pod mounts the K8s Secret as usual — no AWS SDK needed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: payments
spec:
  template:
    spec:
      containers:
        - name: payment-processor
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: payment-db-secret
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: payment-db-secret
                  key: password

# Terraform — IAM role for ESO
resource "aws_iam_role_policy" "eso_secrets_access" {
  name = "eso-secrets-access"
  role = aws_iam_role.external_secrets.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "secretsmanager:GetSecretValue",
          "secretsmanager:DescribeSecret"
        ]
        # Scope to specific secret prefixes per environment
        Resource = "arn:aws:secretsmanager:us-east-1:111111111111:secret:prod/*"
      }
    ]
  })
}

# Pattern 1: Template — compose secrets from multiple sources
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payment-connection-string
  namespace: payments
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: payment-connection
    template:
      type: Opaque
      data:
        connection_string: "postgresql://{{ .username }}:{{ .password }}@{{ .host }}:5432/payments?sslmode=require"
  data:
    - secretKey: username
      remoteRef:
        key: prod/payments/db
        property: username
    - secretKey: password
      remoteRef:
        key: prod/payments/db
        property: password
    - secretKey: host
      remoteRef:
        key: prod/payments/db
        property: host
---
# Pattern 2: Find — sync all secrets matching a pattern
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: all-payment-secrets
  namespace: payments
spec:
  refreshInterval: 30m
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: payment-secrets-bundle
  dataFrom:
    - find:
        name:
          regexp: "^prod/payments/.*"  # All secrets under prod/payments/
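A find selector that matches too much (or too little) is an easy mistake, and the same regular expression can be sanity-checked locally before applying the manifest (secret names below are illustrative):

```python
import re

# Same pattern as the find selector above
pattern = re.compile(r"^prod/payments/.*")

candidates = [
    "prod/payments/db",
    "prod/payments/stripe-api-key",
    "prod/orders/db",       # different service — not matched
    "staging/payments/db",  # different environment — not matched
]
matched = [name for name in candidates if pattern.match(name)]
print(matched)  # ['prod/payments/db', 'prod/payments/stripe-api-key']
```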

In enterprise setups, the EKS cluster is in a Workload Account but needs to access resources in other accounts (Shared Services, Data Platform).

Cross-Account IRSA

# Option 1: Direct cross-account IRSA (recommended)
# The OIDC trust policy in Account 222222 trusts Account 111111's OIDC provider

# In Shared Services Account (222222)
resource "aws_iam_role" "cross_account_ecr" {
  provider = aws.shared_services
  name     = "workload-eks-ecr-access"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Federated = "arn:aws:iam::111111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/ABC123"
        }
        Action = "sts:AssumeRoleWithWebIdentity"
        Condition = {
          StringEquals = {
            "oidc.eks.us-east-1.amazonaws.com/id/ABC123:sub" = "system:serviceaccount:ci:deployer-sa"
          }
        }
      }
    ]
  })
}

Scenario 1: “How do pods in EKS securely access S3 without static credentials?”


Answer:

“I would use IRSA — IAM Roles for Service Accounts. Here is how it works end-to-end:”

  1. EKS cluster registers as an OIDC identity provider in IAM. This happens automatically when you create the cluster. The OIDC issuer URL is unique per cluster.

  2. Create an IAM role with a trust policy that allows the OIDC provider to assume it. The trust policy includes conditions that lock it to a specific namespace and ServiceAccount name.

  3. Create a Kubernetes ServiceAccount annotated with the IAM role ARN.

  4. Pod starts with that ServiceAccount. The EKS webhook mutates the pod spec to:

    • Mount a projected service account token (JWT) at /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    • Set AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables
  5. AWS SDK in the pod detects these env vars, calls STS AssumeRoleWithWebIdentity with the JWT, and receives temporary credentials (access key, secret key, session token).

  6. Pod calls S3 using temporary credentials. Credentials auto-refresh when they expire.
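Steps 4–5 can be mimicked with a pure function: the SDK's web-identity credential provider engages only when both webhook-injected variables are present. A sketch (the environment variable names are the real ones; the function itself is illustrative, not SDK code):

```python
from typing import Optional

def web_identity_config(env: dict) -> Optional[dict]:
    """Mirror the SDK's web-identity provider trigger: it activates only
    when both webhook-injected variables are set, then reads the JWT from
    the token file and calls sts:AssumeRoleWithWebIdentity with it."""
    role_arn = env.get("AWS_ROLE_ARN")
    token_file = env.get("AWS_WEB_IDENTITY_TOKEN_FILE")
    if role_arn and token_file:
        return {"role_arn": role_arn, "token_file": token_file}
    return None  # provider skipped; SDK falls through the credential chain

cfg = web_identity_config({
    "AWS_ROLE_ARN": "arn:aws:iam::111111111111:role/prod-eks-payment-processor",
    "AWS_WEB_IDENTITY_TOKEN_FILE": "/var/run/secrets/eks.amazonaws.com/serviceaccount/token",
})
print(cfg["role_arn"])  # arn:aws:iam::111111111111:role/prod-eks-payment-processor
```

This is why a pod created before the IRSA annotation was added keeps failing: the webhook only injects these variables at pod admission, so the pod must be recreated.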

Security properties:

  • No static credentials anywhere — not in code, not in K8s Secrets, not in env vars
  • Each pod gets its own temporary credentials
  • IAM role is scoped to specific namespace + ServiceAccount
  • Temporary credentials expire after 1 hour by default (configurable up to the role's 12-hour maximum session duration)
  • Full CloudTrail audit trail with pod identity in session tags

Scenario 2: “Compare IRSA vs EKS Pod Identity — when would you use each?”


Answer:

“Both eliminate static credentials. Pod Identity is newer and simpler. Here is when I would choose each:”

Use Pod Identity for:

  • New EKS clusters (2024+)
  • Simpler setup — one aws_eks_pod_identity_association resource vs OIDC configuration
  • Cross-account access — trust policy uses pods.eks.amazonaws.com service principal, no need to manage OIDC provider ARNs across accounts
  • Large-scale environments — associations are managed at the EKS API level, not in K8s annotations

Use IRSA for:

  • Existing clusters already using IRSA — migration has no benefit unless you are simplifying
  • Multi-cloud setups where you want the same OIDC federation pattern for AWS and other providers
  • Edge cases where you need more than 1 role per ServiceAccount (IRSA supports this)
  • Self-managed Kubernetes (kops, kubeadm) where Pod Identity is not available

Key difference in trust policy:

IRSA trust policy condition:
"oidc.eks.us-east-1.amazonaws.com/id/ABC123:sub" = "system:serviceaccount:payments:payment-sa"
↑ Must know the OIDC provider ID — different per cluster
Pod Identity trust policy:
"Principal": { "Service": "pods.eks.amazonaws.com" }
↑ Same for ALL clusters — much simpler cross-account

Scenario 3: “Your pod can’t authenticate to GCP APIs. How do you debug Workload Identity?”


Answer:

“This is a multi-layer problem. I debug from the inside out — pod, node, IAM binding, GSA.”

Step 1: Verify the pod’s service account

kubectl get pod payment-processor-abc123 -n payments -o yaml | grep serviceAccountName
# Expected: payment-sa
kubectl get sa payment-sa -n payments -o yaml | grep "iam.gke.io/gcp-service-account"
# Expected: payment-processor@bank-prod-workload.iam.gserviceaccount.com

Step 2: Check node metadata server

# Exec into the pod and query metadata server
kubectl exec -it payment-processor-abc123 -n payments -- sh
# Check if metadata server is reachable
curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email
# Expected: payment-processor@bank-prod-workload.iam.gserviceaccount.com
# If this returns the NODE's SA email, WI is not configured on the node pool
# Get a token
curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token
# If this fails, the GKE metadata server is not running

Step 3: Verify node pool has GKE_METADATA mode

gcloud container node-pools describe main-pool \
  --cluster=prod-gke-01 \
  --region=me-central1 \
  --format="value(config.workloadMetadataConfig.mode)"
# Expected: GKE_METADATA
# If MODE is blank or GCE_METADATA, WI is not enabled on this node pool

Step 4: Verify IAM binding

gcloud iam service-accounts get-iam-policy \
  payment-processor@bank-prod-workload.iam.gserviceaccount.com \
  --format=json | jq '.bindings[] | select(.role == "roles/iam.workloadIdentityUser")'
# Expected member: "serviceAccount:bank-prod-workload.svc.id.goog[payments/payment-sa]"

Step 5: Verify GSA has the needed permissions

gcloud projects get-iam-policy bank-prod-workload \
  --flatten="bindings[].members" \
  --filter="bindings.members:payment-processor@bank-prod-workload.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
# Check that the required role (e.g., roles/storage.objectViewer) is listed

Common failure modes:

| Symptom | Root Cause | Fix |
| --- | --- | --- |
| Metadata returns node SA email | WI not enabled on node pool | Set workload_metadata_config.mode = GKE_METADATA |
| Metadata returns 404 | GKE metadata server not running | Restart node or recreate pool |
| "Permission denied" on API | GSA missing IAM role | Grant the role to the GSA |
| "IAM denied" on token exchange | Missing workloadIdentityUser binding | Add KSA→GSA binding |
| Wrong project | KSA annotated with wrong GSA email | Fix annotation |

Scenario 4: “Design secure database connectivity from EKS to RDS”


Answer:

“I would use private networking, security group isolation, IRSA for IAM database authentication, and secrets management as defense in depth.”

Pod to RDS Access Architecture

Layer 1: Network isolation

  • RDS in private subnet with no internet gateway route
  • Security group on RDS allows inbound 5432 only from pod security group
  • No public endpoint on RDS

Layer 2: Authentication — IRSA + IAM database auth

  • Pod uses IRSA to get IAM credentials
  • RDS IAM auth generates short-lived token (15 min) instead of password
  • Eliminates password rotation problem entirely

Layer 3: Encryption

  • RDS encrypted at rest with KMS (customer-managed key)
  • TLS in transit enforced (rds.force_ssl = 1)
  • Connection string uses sslmode=verify-full

Layer 4: Monitoring

  • RDS Performance Insights for query monitoring
  • CloudTrail logs IAM auth events
  • VPC Flow Logs capture all connection attempts

Scenario 5: “How do you manage secrets for 50 microservices on Kubernetes?”


Answer:

“I would use External Secrets Operator with a centralized secret store, namespace-scoped access, automated rotation, and GitOps-driven secret references.”

Architecture: 50 Microservices — Secrets Management

Key design decisions:

  1. One ClusterSecretStore per cluster — ESO controller uses a single IRSA/WI identity to access the secret store. Simpler than per-namespace SecretStores.

  2. Path-based access control — secrets are organized as {env}/{team}/{service}/{secret-name}. OPA Gatekeeper enforces that team-a can only reference paths starting with prod/team-a/.

  3. Refresh interval — set to 1 hour for most secrets, 5 minutes for actively-rotating credentials. ESO will detect changes in the external store and update the K8s Secret.

  4. Secret rotation workflow:

    1. Rotate secret in AWS Secrets Manager (manual or Lambda rotation)
    2. ESO detects change on next refresh (≤ 1 hour)
    3. ESO updates K8s Secret
    4. Pod picks up new secret:
    - If mounted as volume → kubelet updates file within 60s
    - If env var → pod restart required (use Stakater Reloader)
  5. Audit trail — CloudTrail/Cloud Audit Logs capture every secret access. ESO reduces the number of principals accessing secrets (1 ESO SA per cluster vs 50 service SAs).
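The path convention in decision 2 boils down to a one-line prefix check, which is essentially what the Gatekeeper policy enforces on every ExternalSecret's remoteRef keys. A sketch (names and paths are illustrative):

```python
def team_may_reference(team: str, remote_key: str, env: str = "prod") -> bool:
    """A team may only reference secret paths under {env}/{team}/."""
    return remote_key.startswith(f"{env}/{team}/")

print(team_may_reference("team-a", "prod/team-a/payments/db-password"))  # True
print(team_may_reference("team-a", "prod/team-b/orders/api-key"))        # False
```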

# Stakater Reloader — auto-restart pods when secrets change
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: payments
  annotations:
    reloader.stakater.com/auto: "true"  # Watch all secrets/configmaps used by this deployment
spec:
  template:
    spec:
      containers:
        - name: payment-processor
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: payment-db-secret  # When ESO updates this, Reloader restarts the pod
                  key: password

Identity Federation Decision Tree

# IRSA — verify token is mounted
kubectl exec -it pod-name -- ls /var/run/secrets/eks.amazonaws.com/serviceaccount/
kubectl exec -it pod-name -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token | jwt decode -
# IRSA — verify env vars
kubectl exec -it pod-name -- env | grep AWS
# Pod Identity — verify agent is running
kubectl get pods -n kube-system -l app.kubernetes.io/name=eks-pod-identity-agent
# Workload Identity — verify metadata server
kubectl exec -it pod-name -- curl -s -H "Metadata-Flavor: Google" \
  http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email
# ESO — check sync status
kubectl get externalsecrets -A
kubectl describe externalsecret payment-db-credentials -n payments