# Landing Zones — Multi-Account Strategy
## Where This Fits

This is the foundation — every other topic (IAM, networking, Kubernetes, security, observability) builds on the organizational structure created here.
As the central infrastructure team, you own the landing zone. You design the OU/folder hierarchy, write SCPs and org policies, build the account vending machine, and define the security baseline that every new account gets automatically. Tenant teams submit a request and receive a fully-configured, network-connected, security-baselined account within minutes.
## What is a Landing Zone?

A landing zone is a pre-configured, secure, multi-account/project cloud environment that provides:
- Account/project structure — logical separation by team, environment, and function
- Identity and access management — centralized SSO, role-based access
- Networking — hub-and-spoke VPCs, centralized egress/ingress, DNS
- Security baseline — logging, monitoring, encryption, guardrails
- Governance — preventive controls, detective controls, cost management
- Automation — account/project vending, baseline deployment, drift detection
Why enterprises need one **before** any workload migration:
- Without a landing zone, teams create accounts/projects ad-hoc with no standards
- No centralized logging = no audit trail = compliance failure
- No network architecture = no private connectivity between workloads
- No guardrails = shadow IT, cost overruns, security incidents
- Retrofitting governance onto 100 unmanaged accounts is 10x harder than building it right from the start
## Guardrails Taxonomy

Every landing zone enforces guardrails at three levels:
| Type | Purpose | AWS Implementation | GCP Implementation |
|---|---|---|---|
| Preventive | Block actions before they happen | SCPs, IAM permission boundaries | Org policies, IAM deny policies |
| Detective | Alert when violations occur | AWS Config rules, Security Hub | SCC findings, Cloud Asset Inventory |
| Corrective | Auto-remediate violations | Config auto-remediation, Lambda | Cloud Functions triggered by SCC |
## Landing Zone Implementation

AWS uses account-based isolation managed through Control Tower and Account Factory for Terraform (AFT). GCP uses project-based isolation with a folder hierarchy managed through a project factory. Both achieve multi-tenant isolation with guardrails, but the primitives differ.
## AWS Landing Zone — Control Tower and Beyond

### AWS Control Tower — The Managed Solution

AWS Control Tower provides a managed landing zone with:
#### Control Tower Components

| Component | Purpose |
|---|---|
| Landing Zone | The overall multi-account environment (currently version 4.0) |
| Organizational Units (OUs) | Logical grouping for accounts (Security, Sandbox, Workloads) |
| Controls (Guardrails) | Preventive (SCP-based), detective (Config-based), proactive (CloudFormation hooks) |
| Account Factory | Console-based or API-driven account provisioning with baselines |
| Dashboard | Compliance status across all accounts and controls |
| Landing Zone APIs | Programmatic management of baselines and controls |
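Controls can also be enabled on an OU programmatically. A minimal sketch using the Terraform AWS provider's `aws_controltower_control` resource — the specific control name and the OU resource reference below are illustrative placeholders, not taken from this page:

```hcl
# Enable a Control Tower control (guardrail) on one OU.
# Control identifiers follow the documented ARN format
# arn:aws:controltower:<region>::control/<CONTROL_NAME>;
# AWS-GR_ENCRYPTED_VOLUMES (detective: EBS volumes must be
# encrypted) is used here as an example — look up the exact
# identifier for each control in the controls reference.
resource "aws_controltower_control" "encrypted_volumes" {
  control_identifier = "arn:aws:controltower:ap-southeast-1::control/AWS-GR_ENCRYPTED_VOLUMES"
  target_identifier  = aws_organizations_organizational_unit.workloads.arn
}
```

Managing controls in Terraform keeps guardrail assignments reviewable in Git alongside the SCPs below, instead of living only in the Control Tower console.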
### Service Control Policies (SCPs) — Deep Dive

SCPs define the maximum permissions for accounts in an OU. They do not grant permissions — they act as a guardrail (similar to permission boundaries, but at the account level).
Essential SCPs for an enterprise bank:
**SCP 1: Deny regions outside approved list**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnapprovedRegions",
      "Effect": "Deny",
      "NotAction": [
        "iam:*", "sts:*", "organizations:*", "support:*",
        "budgets:*", "cloudfront:*", "route53:*", "wafv2:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["ap-southeast-1", "me-south-1", "eu-west-1"]
        }
      }
    }
  ]
}
```

**SCP 2: Prevent disabling security services**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDisablingSecurityServices",
      "Effect": "Deny",
      "Action": [
        "guardduty:DeleteDetector",
        "guardduty:DisassociateFromMasterAccount",
        "guardduty:UpdateDetector",
        "securityhub:DisableSecurityHub",
        "securityhub:DeleteMembers",
        "config:StopConfigurationRecorder",
        "config:DeleteConfigurationRecorder",
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail",
        "access-analyzer:DeleteAnalyzer"
      ],
      "Resource": "*"
    }
  ]
}
```

**SCP 3: Deny creating IAM users and access keys (force SSO/roles)**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIAMUsersAndKeys",
      "Effect": "Deny",
      "Action": [
        "iam:CreateUser",
        "iam:CreateAccessKey",
        "iam:CreateLoginProfile"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalArn": ["arn:aws:iam::*:role/BreakGlassRole"]
        }
      }
    }
  ]
}
```

**SCP 4: Deny public S3 buckets**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublicS3",
      "Effect": "Deny",
      "Action": ["s3:PutBucketPublicAccessBlock"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:PublicAccessBlockConfiguration/BlockPublicAcls": "true",
          "s3:PublicAccessBlockConfiguration/BlockPublicPolicy": "true",
          "s3:PublicAccessBlockConfiguration/IgnorePublicAcls": "true",
          "s3:PublicAccessBlockConfiguration/RestrictPublicBuckets": "true"
        }
      }
    }
  ]
}
```

### Account Factory for Terraform (AFT)
AFT is the Terraform-native way to provision accounts through Control Tower. It uses a GitOps workflow:
AFT Repository Structure:
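AFT conventionally splits this workflow across four Git repositories (per the AFT documentation); a sketch of the typical layout — directory contents shown here are illustrative:

```text
aft-account-request/                      # one .tf file per account request
aft-global-customizations/                # Terraform applied to EVERY vended account
aft-account-customizations/               # per-account-type Terraform (e.g., standard-workload/)
aft-account-provisioning-customizations/  # hooks that run during the provisioning pipeline
```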
AFT Account Request example:
```hcl
module "team_alpha_prod" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "aws+team-alpha-prod@bank.com"
    AccountName               = "team-alpha-prod"
    ManagedOrganizationalUnit = "Workloads/Production"
    SSOUserEmail              = "platform-admin@bank.com"
    SSOUserFirstName          = "Platform"
    SSOUserLastName           = "Admin"
  }

  account_tags = {
    Team        = "team-alpha"
    Environment = "production"
    CostCenter  = "CC-1234"
    DataClass   = "confidential"
    Compliance  = "pci-dss"
  }

  account_customizations_name = "team-alpha-prod"

  change_management_parameters = {
    change_requested_by = "platform-team"
    change_reason       = "New production account for Team Alpha payments service"
  }
}
```

### Global Baseline — What Every Account Gets
```hcl
# This runs in EVERY new account automatically

# 1. VPC from IPAM
module "vpc" {
  source = "../../modules/baseline-vpc"

  ipam_pool_id   = data.aws_ssm_parameter.ipam_pool_id.value
  netmask_length = 22 # /22 = 1024 IPs per account
  az_count       = 3

  enable_flow_logs     = true
  flow_log_destination = data.aws_ssm_parameter.central_log_bucket_arn.value
}

# 2. Transit Gateway attachment (connect to hub)
resource "aws_ec2_transit_gateway_vpc_attachment" "hub" {
  subnet_ids         = module.vpc.private_subnet_ids
  transit_gateway_id = data.aws_ssm_parameter.tgw_id.value
  vpc_id             = module.vpc.vpc_id

  transit_gateway_default_route_table_association = false
  transit_gateway_default_route_table_propagation = false

  tags = { Name = "tgw-attach-${var.account_name}" }
}

# 3. IAM baseline roles
resource "aws_iam_role" "admin_role" {
  name               = "PlatformAdminRole"
  assume_role_policy = data.aws_iam_policy_document.sso_trust.json
}

resource "aws_iam_role" "readonly_role" {
  name               = "DeveloperReadOnlyRole"
  assume_role_policy = data.aws_iam_policy_document.sso_trust.json
}

# 4. AWS Config recorder
resource "aws_config_configuration_recorder" "main" {
  name     = "default"
  role_arn = aws_iam_role.config_role.arn

  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}

resource "aws_config_delivery_channel" "main" {
  name           = "default"
  s3_bucket_name = data.aws_ssm_parameter.config_bucket.value
  depends_on     = [aws_config_configuration_recorder.main]
}

# 5. GuardDuty member (auto-join delegated admin)
resource "aws_guardduty_member" "member" {
  provider    = aws.security_admin
  account_id  = var.account_id
  detector_id = data.aws_guardduty_detector.admin.id
  email       = var.account_email
  invite      = true
}

# 6. Security Hub member
resource "aws_securityhub_member" "member" {
  provider   = aws.security_admin
  account_id = var.account_id
  email      = var.account_email
}

# 7. Default EBS encryption
resource "aws_ebs_encryption_by_default" "enabled" {
  enabled = true
}

# 8. S3 public access block (account-level)
resource "aws_s3_account_public_access_block" "block" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

## GCP Resource Hierarchy
GCP uses a different model: Organization → Folders → Projects → Resources. Key differences from AWS:
| Aspect | AWS | GCP |
|---|---|---|
| Container | Account (hard boundary) | Project (softer boundary) |
| Grouping | OU (flat hierarchy of accounts) | Folders (nestable, up to 10 levels) |
| Policy inheritance | SCPs inherit down OUs | IAM AND org policies inherit down folders |
| Network isolation | VPC per account (default) | Shared VPC across projects (common) |
| Billing | One billing per account (or consolidated) | One billing account for many projects |
### Organization Policies — GCP's Guardrails

Organization policies are constraints applied to the resource hierarchy (organization, folder, project). They inherit downward and can be overridden at lower levels (unless `inheritFromParent` is enforced).
Essential org policies for an enterprise bank:
| Constraint | Purpose | Value |
|---|---|---|
| `constraints/compute.vmExternalIpAccess` | Deny external IPs on VMs | Deny all |
| `constraints/iam.disableServiceAccountKeyCreation` | Force WIF/impersonation | `enforced: true` |
| `constraints/iam.automaticIamGrantsForDefaultServiceAccounts` | No Editor on default SAs | `enforced: true` |
| `constraints/compute.restrictSharedVpcHostProjects` | Only approved host projects | Allowlist |
| `constraints/gcp.resourceLocations` | Data residency (e.g., Middle East only) | `in:me-central1-locations`, `in:me-central2-locations` |
| `constraints/compute.restrictVpcPeering` | Control network connectivity | Allowlist of approved projects |
| `constraints/storage.uniformBucketLevelAccess` | Enforce uniform bucket access | `enforced: true` |
| `constraints/sql.restrictPublicIp` | No public IPs on Cloud SQL | `enforced: true` |
| `constraints/compute.requireShieldedVm` | Require Shielded VMs | `enforced: true` |
| `constraints/iam.allowedPolicyMemberDomains` | Only bank.com identities | bank.com customer ID |
### Custom Organization Policies

When built-in constraints are not granular enough, create custom constraints:
```yaml
name: organizations/123456/customConstraints/custom.restrictGKENodePoolMachineTypes
resource_types:
  - container.googleapis.com/NodePool
method_types:
  - CREATE
  - UPDATE
condition: >
  resource.config.machineType.contains("e2-") ||
  resource.config.machineType.contains("n2d-")
action_type: ALLOW
display_name: "Restrict GKE node pool machine types to e2 and n2d families"
description: "Only allow cost-effective machine families for GKE node pools"
```

### GCP Project Factory — Terraform Module
```hcl
# Creates a new project with full baseline
resource "google_project" "project" {
  name                = var.project_name
  project_id          = var.project_id
  org_id              = var.org_id
  folder_id           = var.folder_id
  billing_account     = var.billing_account_id
  auto_create_network = false # We use Shared VPC, not the default network

  labels = {
    team        = var.team_name
    environment = var.environment
    cost_center = var.cost_center
    managed_by  = "platform-team"
  }
}

# Enable required APIs
resource "google_project_service" "apis" {
  for_each = toset([
    "compute.googleapis.com",
    "container.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "iam.googleapis.com",
    "logging.googleapis.com",
    "monitoring.googleapis.com",
    "secretmanager.googleapis.com",
    "artifactregistry.googleapis.com",
    "sqladmin.googleapis.com",
    "cloudkms.googleapis.com",
  ])

  project = google_project.project.project_id
  service = each.value

  disable_dependent_services = false
  disable_on_destroy         = false
}

# Disable default service accounts (Editor role is too broad)
resource "google_project_default_service_accounts" "disable" {
  project = google_project.project.project_id
  action  = "DISABLE"

  depends_on = [google_project_service.apis]
}

# Attach as Shared VPC service project
resource "google_compute_shared_vpc_service_project" "service" {
  host_project    = var.shared_vpc_host_project
  service_project = google_project.project.project_id

  depends_on = [google_project_service.apis]
}

# Grant team group access
resource "google_project_iam_member" "team_viewer" {
  project = google_project.project.project_id
  role    = "roles/viewer"
  member  = "group:${var.team_name}@bank.com"
}

resource "google_project_iam_member" "team_developer" {
  project = google_project.project.project_id
  role    = var.environment == "production" ? "roles/viewer" : "roles/editor"
  member  = "group:${var.team_name}-devs@bank.com"
}

# Log sink to centralized logging project
resource "google_logging_project_sink" "central" {
  name        = "central-audit-sink"
  project     = google_project.project.project_id
  destination = "logging.googleapis.com/projects/${var.logging_project}/locations/global/buckets/central-audit"

  filter = "logName:\"cloudaudit.googleapis.com\""

  unique_writer_identity = true
}

# Grant the log sink writer access to the central logging bucket
resource "google_project_iam_member" "log_writer" {
  project = var.logging_project
  role    = "roles/logging.bucketWriter"
  member  = google_logging_project_sink.central.writer_identity
}

# Billing budget alert
resource "google_billing_budget" "budget" {
  billing_account = var.billing_account_id
  display_name    = "${var.project_name}-budget"

  budget_filter {
    projects = ["projects/${google_project.project.number}"]
  }

  amount {
    specified_amount {
      currency_code = "USD"
      units         = var.monthly_budget_usd
    }
  }

  threshold_rules { threshold_percent = 0.5 }
  threshold_rules { threshold_percent = 0.8 }
  threshold_rules {
    threshold_percent = 1.0
    spend_basis       = "FORECASTED_SPEND"
  }

  all_updates_rule {
    monitoring_notification_channels = [var.notification_channel_id]
    disable_default_iam_recipients   = false
  }
}

# Org policy overrides for sandbox (relax constraints)
resource "google_org_policy_policy" "sandbox_external_ip" {
  count  = var.environment == "sandbox" ? 1 : 0
  name   = "projects/${google_project.project.project_id}/policies/compute.vmExternalIpAccess"
  parent = "projects/${google_project.project.project_id}"

  spec {
    reset = true # Remove the inherited constraint for sandbox
  }
}
```

### Using the Project Factory Module
```hcl
module "team_alpha_prod" {
  source = "../../modules/project-factory"

  project_name       = "Team Alpha Production"
  project_id         = "bank-team-alpha-prod"
  folder_id          = google_folder.production_team_alpha.id
  org_id             = var.org_id
  billing_account_id = var.billing_account_id

  team_name   = "team-alpha"
  environment = "production"
  cost_center = "CC-1234"

  shared_vpc_host_project = var.shared_vpc_host_project
  logging_project         = var.logging_project
  monthly_budget_usd      = 5000

  notification_channel_id = var.finops_notification_channel
}
```

### GCP Folder Hierarchy — Terraform
```hcl
# Organization-level folder structure
resource "google_folder" "shared_infra" {
  display_name = "Shared Infrastructure"
  parent       = "organizations/${var.org_id}"
}

resource "google_folder" "production" {
  display_name = "Production"
  parent       = "organizations/${var.org_id}"
}

resource "google_folder" "non_production" {
  display_name = "Non-Production"
  parent       = "organizations/${var.org_id}"
}

resource "google_folder" "staging" {
  display_name = "Staging"
  parent       = google_folder.non_production.name
}

resource "google_folder" "development" {
  display_name = "Development"
  parent       = google_folder.non_production.name
}

resource "google_folder" "sandbox" {
  display_name = "Sandbox"
  parent       = "organizations/${var.org_id}"
}

# Per-team folders under production
resource "google_folder" "prod_teams" {
  for_each     = toset(var.team_names)
  display_name = each.value
  parent       = google_folder.production.name
}

# Org policies at folder level
resource "google_org_policy_policy" "prod_no_external_ip" {
  name   = "${google_folder.production.name}/policies/compute.vmExternalIpAccess"
  parent = google_folder.production.name

  spec {
    rules { enforce = "TRUE" }
  }
}

resource "google_org_policy_policy" "prod_restrict_regions" {
  name   = "${google_folder.production.name}/policies/gcp.resourceLocations"
  parent = google_folder.production.name

  spec {
    rules {
      values {
        allowed_values = [
          "in:me-central1-locations",
          "in:me-central2-locations",
          "in:asia-southeast1-locations",
        ]
      }
    }
  }
}

resource "google_org_policy_policy" "prod_no_sa_keys" {
  name   = "${google_folder.production.name}/policies/iam.disableServiceAccountKeyCreation"
  parent = google_folder.production.name

  spec {
    rules { enforce = "TRUE" }
  }
}
```

### Assured Workloads and VPC Service Controls
For a regulated bank, GCP provides additional compliance features:
| Feature | Purpose |
|---|---|
| Assured Workloads | Create workload environments compliant with specific regimes (CJIS, FedRAMP, etc.) with data residency and personnel controls |
| VPC Service Controls | Create security perimeters around GCP resources to prevent data exfiltration — even with valid IAM credentials |
| Access Context Manager | Define access levels based on IP, device status, identity for VPC SC ingress/egress rules |
| Cloud Asset Inventory | Real-time inventory of all resources across the org — query with SQL-like syntax |
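A VPC Service Controls perimeter can be managed in Terraform. A minimal sketch with the Google provider's Access Context Manager resources — the organization ID, project number, and perimeter name below are placeholders:

```hcl
# An access policy is the org-level container for perimeters
resource "google_access_context_manager_access_policy" "policy" {
  parent = "organizations/123456"     # placeholder org ID
  title  = "bank-access-policy"
}

# Perimeter: even a caller with valid IAM credentials cannot move
# data out of these services across the perimeter boundary
resource "google_access_context_manager_service_perimeter" "prod" {
  parent = "accessPolicies/${google_access_context_manager_access_policy.policy.name}"
  name   = "accessPolicies/${google_access_context_manager_access_policy.policy.name}/servicePerimeters/prod_perimeter"
  title  = "prod_perimeter"

  status {
    resources           = ["projects/111111111111"] # project NUMBERS, not IDs
    restricted_services = ["storage.googleapis.com", "bigquery.googleapis.com"]
  }
}
```

Perimeters complement IAM: IAM answers "who can call this API", while the perimeter answers "where can the data go".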
## AWS vs GCP Landing Zone Comparison

| Aspect | AWS | GCP |
|---|---|---|
| Managed solution | Control Tower | No equivalent (use CFT or Fabric) |
| Account factory | AFT (Account Factory for Terraform) | Custom Terraform project factory |
| Guardrails | SCPs (deny-list, full IAM language) | Org policies (boolean constraints + custom) |
| Account grouping | OUs (flat, accounts can only be in one OU) | Folders (nestable up to 10 levels) |
| Network isolation | VPC per account (default) | Shared VPC across projects (common) |
| Centralized logging | Org CloudTrail to Log Archive account | Org-level log sinks to logging project |
| Security posture | Security Hub + GuardDuty (delegated admin) | SCC (org-level) + Assured Workloads |
| Compliance | AWS Artifact, Config conformance packs | Assured Workloads, compliance reports |
| Cost management | AWS Budgets, Cost Explorer (consolidated) | Billing budgets per project, BigQuery export |
| Policy language | JSON (same as IAM), conditions, NotAction | Boolean constraints + CEL for custom |
| Drift detection | Config rules, Control Tower drift alerts | Cloud Asset Inventory, SCC findings |
| Break-glass | IAM Identity Center emergency access + SCP exemption | Emergency access via dedicated SA + org policy exception |
## Account/Project Vending Machine — The Full Workflow

Regardless of cloud, the vending pattern is the same:
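Assembled from the end-to-end example later on this page, the pattern can be sketched as:

```text
Request   (PR / ticket against the account-request repo)
  → Validate  (CI checks naming, tags, CIDR availability, budget)
  → Approve   (auto for non-prod; human review for prod)
  → Provision (Control Tower / project factory creates the account or project)
  → Baseline  (network, logging, security tooling, IAM applied automatically)
  → Notify    (team receives account details and access instructions)
```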
### IPAM Integration for CIDR Management

When vending accounts at scale, CIDR conflicts are a real risk. Use centralized IP Address Management (IPAM):
```hcl
# AWS VPC IPAM — central pool allocates CIDRs automatically
resource "aws_vpc_ipam" "main" {
  operating_regions { region_name = "ap-southeast-1" }
  operating_regions { region_name = "me-south-1" }
}

resource "aws_vpc_ipam_pool" "workloads" {
  ipam_scope_id  = aws_vpc_ipam.main.private_default_scope_id
  address_family = "ipv4"
  locale         = "ap-southeast-1"
}

resource "aws_vpc_ipam_pool_cidr" "workloads" {
  ipam_pool_id = aws_vpc_ipam_pool.workloads.id
  cidr         = "10.0.0.0/12" # 10.0.0.0 - 10.15.255.255
}

# In account factory baseline — VPC gets CIDR from IPAM pool
resource "aws_vpc" "main" {
  ipv4_ipam_pool_id   = data.aws_ssm_parameter.ipam_pool_id.value
  ipv4_netmask_length = 22 # Each account gets a /22 (1024 IPs)
}
```

```hcl
# GCP uses Shared VPC — subnets are allocated in the host project
# and shared to service projects

# In the network-hub project (Shared VPC host)
resource "google_compute_subnetwork" "team_alpha_prod" {
  name          = "team-alpha-prod-subnet"
  project       = var.shared_vpc_host_project
  network       = google_compute_network.shared_vpc.name
  region        = "me-central1"
  ip_cidr_range = "10.1.0.0/22" # Allocated by platform team

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.64.0.0/16"
  }
  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.65.0.0/20"
  }

  private_ip_google_access = true

  log_config {
    aggregation_interval = "INTERVAL_5_SEC"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}

# Grant the service project access to this subnet
resource "google_compute_subnetwork_iam_member" "team_alpha" {
  project    = var.shared_vpc_host_project
  region     = "me-central1"
  subnetwork = google_compute_subnetwork.team_alpha_prod.name
  role       = "roles/compute.networkUser"
  member     = "serviceAccount:${var.team_alpha_sa_email}"
}
```

## Interview Scenarios
### Scenario 1: Create 50 AWS Accounts for 10 Teams

Q: “Create 50 AWS accounts for 10 teams across dev/staging/prod — walk through the architecture and automation.”
Model Answer:
OU Structure:
Automation with AFT:
- **AFT deployment:** Deploy AFT in a dedicated AFT management account within the Infrastructure OU. Use AFT version 1.15.0+ with Terraform 1.6+.

- **Account requests via Git:** Each account is defined in a `.tf` file in the `aft-account-request` repo. For 50 accounts across 10 teams:
```hcl
# Generate all accounts programmatically
locals {
  teams        = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta", "iota", "kappa"]
  environments = ["dev", "staging", "prod"]

  accounts = flatten([
    for team in local.teams : [
      for env in local.environments : {
        name  = "team-${team}-${env}"
        email = "aws+team-${team}-${env}@bank.com"
        ou    = "Workloads/${title(env)}"
        team  = team
        env   = env
      }
    ]
  ])
}
```

- **Global baseline** (applied to every account automatically):
- VPC with /22 CIDR from IPAM (no overlap possible)
- Transit Gateway attachment to Network Hub
- AWS Config recorder → central Log Archive bucket
- GuardDuty member → Security Tooling Account
- Security Hub member → Security Tooling Account
- IAM roles: PlatformAdmin, DeveloperReadOnly, AppDeployer
- EBS default encryption enabled
- S3 public access block (account-level)
- CloudWatch log group for application logs
- **SCPs per OU:**
- Production OU: Deny region outside approved list, deny termination without tag, deny disabling security services, deny IAM user creation
- Staging OU: Same as prod but allow wider instance types
- Development OU: Relaxed — allow more services, still deny IAM users and public access
- Sandbox OU: Most relaxed — auto-nuke resources after 7 days, $100/month budget alarm
- **SSO (IAM Identity Center):**
- Federate from corporate Okta
- Permission sets: PlatformAdmin, TeamDeveloper, ReadOnly, BreakGlass
- Assign `team-alpha-devs` Okta group → TeamDeveloper on team-alpha-dev, team-alpha-staging
- Assign `team-alpha-devs` Okta group → ReadOnly on team-alpha-prod (they can view but not change prod directly — CI/CD deploys)
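In Terraform, one such group-to-permission-set assignment can be sketched with `aws_ssoadmin_account_assignment` — the data source names and the account variable here are illustrative, not from this page:

```hcl
# Assign the team-alpha-devs group the TeamDeveloper permission set
# on the team-alpha-dev account (identifiers are placeholders)
resource "aws_ssoadmin_account_assignment" "team_alpha_dev" {
  instance_arn       = tolist(data.aws_ssoadmin_instances.main.arns)[0]
  permission_set_arn = aws_ssoadmin_permission_set.team_developer.arn

  principal_id   = data.aws_identitystore_group.team_alpha_devs.group_id
  principal_type = "GROUP"

  target_id   = var.team_alpha_dev_account_id
  target_type = "AWS_ACCOUNT"
}
```

Driving these assignments from a map of groups → accounts keeps access grants in Git rather than hand-assigned in the console.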
- **Timeline:** Initial setup of Control Tower + AFT takes 2-3 days. Vending 50 accounts takes ~2 hours (AFT processes them sequentially, ~2-3 minutes per account). The global baseline applies automatically.
### Scenario 2: Create 50 GCP Projects for 10 Teams

Q: “Do the same on GCP — create 50 projects for 10 teams.”
Model Answer:
Folder Structure:
Automation — Terraform Project Factory:
```hcl
# Vend all 50 projects in a loop
locals {
  teams = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta", "iota", "kappa"]
  environments = {
    dev     = { folder_id = google_folder.development.id, budget = 1000 }
    staging = { folder_id = google_folder.staging.id, budget = 3000 }
    prod    = { folder_id = google_folder.production.id, budget = 10000 }
  }
}

module "team_projects" {
  source = "./modules/project-factory"
  for_each = {
    for item in flatten([
      for team in local.teams : [
        for env, config in local.environments : {
          key          = "team-${team}-${env}"
          project_id   = "bank-${team}-${env}"
          project_name = "Team ${title(team)} ${title(env)}"
          folder_id    = config.folder_id
          team_name    = team
          environment  = env
          budget       = config.budget
        }
      ]
    ]) : item.key => item
  }

  project_name            = each.value.project_name
  project_id              = each.value.project_id
  folder_id               = each.value.folder_id
  org_id                  = var.org_id
  billing_account_id      = var.billing_account_id
  team_name               = each.value.team_name
  environment             = each.value.environment
  shared_vpc_host_project = var.shared_vpc_host_project
  logging_project         = var.logging_project
  monthly_budget_usd      = each.value.budget
  notification_channel_id = var.finops_channel
}
```

Key differences from AWS approach:
1. **Shared VPC instead of per-project VPCs:** All projects are service projects attached to the network-hub host project. Subnets are centrally managed — teams cannot create their own networks.

2. **Org policies instead of SCPs:** Applied at folder level, inherited by all projects. The Production folder gets strict policies (no external IPs, no SA keys, restricted regions). The Dev folder gets relaxed policies.

3. **Google Groups for IAM:** Map the `team-alpha@bank.com` group to `roles/viewer` on the production project and `roles/editor` on the dev project. No per-user IAM assignments.

4. **No “Account Factory” equivalent:** GCP does not have a Control Tower equivalent. You build your own project factory with Terraform modules. Google Cloud Foundation Fabric and the Cloud Foundation Toolkit (CFT) provide reference modules.

5. **Billing budgets per project:** Each project gets a billing budget with alerts at 50%, 80%, and 100% forecasted spend.
### Scenario 3: Ensuring Security Baseline on Every New Account

Q: “How do you ensure every new account/project gets the same security baseline automatically?”
Model Answer:
This is a pipeline problem. The key is: no human touches the new account/project directly — everything is automated.
Defense in depth — three layers:
1. **Preventive (before it happens):**
   - SCPs/org policies prevent creating public resources, disabling security tools, or using unapproved regions
   - Permission boundaries cap what tenant roles can do

2. **Detective (catch violations):**
   - AWS Config rules or GCP SCC findings detect non-compliant resources
   - A daily Terraform plan detects baseline drift
   - Cloud Asset Inventory queries find resources without required tags

3. **Corrective (auto-fix):**
   - Config auto-remediation closes open security groups within 60 seconds
   - A Lambda/Cloud Function removes public access from S3/GCS buckets
   - An auto-tagging function adds mandatory tags to untagged resources
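As a concrete sketch of pairing a detective control with a corrective one on AWS: an AWS-managed Config rule wired to an AWS-managed SSM automation document. The remediation role reference is a placeholder:

```hcl
# Detective: managed Config rule flags S3 buckets that allow public read
resource "aws_config_config_rule" "s3_public_read" {
  name = "s3-bucket-public-read-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }
}

# Corrective: auto-remediate via the AWS-managed SSM document
resource "aws_config_remediation_configuration" "s3_public_read" {
  config_rule_name = aws_config_config_rule.s3_public_read.name
  resource_type    = "AWS::S3::Bucket"
  target_type      = "SSM_DOCUMENT"
  target_id        = "AWS-DisableS3BucketPublicReadWrite"

  automatic                  = true
  maximum_automatic_attempts = 3
  retry_attempt_seconds      = 60

  parameter {
    name         = "AutomationAssumeRole"
    static_value = aws_iam_role.remediation.arn # placeholder role
  }
  parameter {
    name           = "S3BucketName"
    resource_value = "RESOURCE_ID" # Config passes the offending bucket
  }
}
```

The preventive SCP still matters: remediation closes the gap after the fact, while the SCP stops most of these actions from succeeding at all.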
Testing the baseline:
- Every baseline change goes through the same PR → review → test pipeline
- Test in a dedicated test account/project first
- Use OPA/Conftest to policy-check the Terraform plan before apply
- Monthly: run a “baseline compliance audit” across all accounts
### Scenario 4: New Account Request — End-to-End Flow

Q: “A team requests a new AWS account. What happens from request to ready?”
Model Answer:
Here is the complete flow for Team Kappa requesting a production account:
Step 1 — Request (Day 0, 10 minutes)
Team lead submits a Jira ticket or opens a PR to the `aft-account-request` repo:
```hcl
module "team_kappa_prod" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "aws+team-kappa-prod@bank.com"
    AccountName               = "team-kappa-prod"
    ManagedOrganizationalUnit = "Workloads/Production"
    SSOUserEmail              = "platform-admin@bank.com"
    SSOUserFirstName          = "Platform"
    SSOUserLastName           = "Admin"
  }

  account_tags = {
    Team        = "kappa"
    Environment = "production"
    CostCenter  = "CC-5678"
    DataClass   = "confidential"
  }

  account_customizations_name = "standard-workload"
}
```

Step 2 — Approval (Day 0-1)
- For non-prod: auto-approved if tags are valid and CIDR is available
- For prod: platform team lead reviews and approves the PR
- CI pipeline validates: naming convention, tag completeness, CIDR availability in IPAM, budget is set
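The naming-convention check can even live in Terraform itself via a `validation` block, so a malformed request fails `terraform plan` in CI before anything is provisioned. A sketch — the regex encodes the naming rule assumed in this page's examples:

```hcl
variable "account_name" {
  type        = string
  description = "Must follow the team-<name>-<env> convention"

  # Reject anything that does not match team-<lowercase name>-<env>
  validation {
    condition     = can(regex("^team-[a-z]+-(dev|staging|prod)$", var.account_name))
    error_message = "Account name must match team-<name>-<dev|staging|prod>."
  }
}
```

Tag completeness and CIDR availability are better checked by pipeline scripts or policy-as-code, since they depend on external state (IPAM, the tag taxonomy).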
Step 3 — Provisioning (Day 1, ~15 minutes automated)
AFT pipeline triggers on merge:
1. `CreateManagedAccount` API → Control Tower creates the account in the Production OU
2. Global customizations run:
   - VPC created with a /22 from the IPAM pool (e.g., 10.3.4.0/22)
   - Transit Gateway attachment created → route propagation to hub
   - Route added: 0.0.0.0/0 → TGW (all traffic goes through Network Firewall in hub)
   - AWS Config recorder started → logs to central S3 in Log Archive
   - GuardDuty enabled → findings sent to Security Tooling admin
   - Security Hub enabled → findings aggregated
   - EBS default encryption → on
   - S3 public access block → on
   - IAM roles created: PlatformAdmin, TeamDeveloper, ReadOnly
3. Account customizations run (for `standard-workload`):
   - EKS-ready VPC subnets tagged for the ALB controller
   - ECR pull-through cache rule pointing to Shared Services ECR
   - Secrets Manager VPC endpoint created
4. Provisioning customizations run:
   - IAM Identity Center assignment: `team-kappa` group → TeamDeveloper permission set
   - Route53 private hosted zone: `kappa.internal.bank.com`
   - ArgoCD ApplicationSet in Shared Services creates namespace `team-kappa` in the management cluster
Step 4 — Notification (Day 1, automated)
Slack message to #team-kappa:
```text
New AWS account ready for Team Kappa (Production)
- Account ID: 444455556666
- Console: https://bank.awsapps.com/start
- VPC CIDR: 10.3.4.0/22
- Region: ap-southeast-1
- Guardrails: Production OU SCPs active (region lock, no public access, no IAM users)
- ArgoCD namespace: team-kappa (deploy via GitOps)
- Support: #platform-support
```

Step 5 — Day 2 Operations
- Daily Terraform plan detects if anyone manually changed the baseline
- Config rules evaluate all new resources against compliance standards
- Billing budget alerts at 50%, 80%, 100% of forecast
- GuardDuty and Security Hub findings auto-routed to the security team
Total time from request to ready: ~4-6 hours (mostly waiting for approval; provisioning itself takes ~15 minutes).
## References

- AWS Control Tower Documentation — managed service for setting up and governing multi-account environments
- AWS Prescriptive Guidance: Landing Zones — best practices for designing and building landing zones
- Landing Zone Design in Google Cloud — Architecture Center guide for GCP landing zone patterns
- GCP Resource Manager Documentation — managing organizations, folders, and projects
## Tools & Frameworks

- Terraform AWS Control Tower AFT — Account Factory for Terraform, official AWS module for account vending