Landing Zone Architecture Scenarios
Overview
This page covers five progressively complex landing zone architectures. Each pattern builds on the previous one, adding governance, isolation, and scale. The central infrastructure team selects the right pattern based on the organization's size, compliance requirements, and operational maturity.
Pattern 1: Startup Landing Zone (5-10 Accounts)
When to Use
- Seed or Series A startup, or a small team within a larger org piloting cloud
- 1-3 engineering teams, 5-20 developers
- Single region, single cloud
- Need to move fast but with minimum viable governance
Architecture
Network (simple — no Transit Gateway yet):
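The network diagram is not reproduced here; in outline it is two or three VPCs joined by VPC peering, roughly as sketched below (resource names, account variables, and CIDRs are illustrative, not from this guide):

```hcl
# Illustrative sketch — cross-account peering also needs an
# aws_vpc_peering_connection_accepter in the peer account
resource "aws_vpc_peering_connection" "dev_to_prod" {
  vpc_id        = aws_vpc.dev.id  # requester, e.g. 10.1.0.0/16
  peer_vpc_id   = aws_vpc.prod.id # accepter, e.g. 10.0.0.0/16
  peer_owner_id = var.prod_account_id

  tags = { Name = "dev-to-prod" }
}

# Each side then adds a route to the peer CIDR
resource "aws_route" "to_prod" {
  route_table_id            = aws_route_table.dev_private.id
  destination_cidr_block    = "10.0.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.dev_to_prod.id
}
```

At this scale a full mesh is only 1-3 peerings, which is why the tradeoff table below prefers peering over a Transit Gateway.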
Terraform — Minimal Startup Setup
```hcl
# Minimal org setup — 5 accounts
resource "aws_organizations_organization" "org" {
  aws_service_access_principals = [
    "controltower.amazonaws.com",
    "guardduty.amazonaws.com",
    "securityhub.amazonaws.com",
    "config.amazonaws.com",
    "sso.amazonaws.com",
  ]
  feature_set          = "ALL"
  enabled_policy_types = ["SERVICE_CONTROL_POLICY"]
}

# Organizational Units
resource "aws_organizations_organizational_unit" "security" {
  name      = "Security"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_organizational_unit" "workloads" {
  name      = "Workloads"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_organizational_unit" "shared" {
  name      = "Shared"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_organizational_unit" "sandbox" {
  name      = "Sandbox"
  parent_id = aws_organizations_organization.org.roots[0].id
}

# Accounts
resource "aws_organizations_account" "security" {
  name      = "security"
  email     = "aws+security@startup.com"
  parent_id = aws_organizations_organizational_unit.security.id
  role_name = "OrganizationAccountAccessRole"

  lifecycle {
    ignore_changes = [role_name]
  }
}

resource "aws_organizations_account" "prod" {
  name      = "production"
  email     = "aws+prod@startup.com"
  parent_id = aws_organizations_organizational_unit.workloads.id
  role_name = "OrganizationAccountAccessRole"
}

resource "aws_organizations_account" "dev" {
  name      = "development"
  email     = "aws+dev@startup.com"
  parent_id = aws_organizations_organizational_unit.workloads.id
  role_name = "OrganizationAccountAccessRole"
}

# Minimum viable SCP — deny dangerous actions
resource "aws_organizations_policy" "baseline_guardrails" {
  name        = "baseline-guardrails"
  description = "Minimum guardrails for all accounts"
  type        = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "DenyRootUser"
        Effect   = "Deny"
        Action   = "*"
        Resource = "*"
        Condition = {
          StringLike = {
            "aws:PrincipalArn" = "arn:aws:iam::*:root"
          }
        }
      },
      {
        Sid      = "DenyLeaveOrganization"
        Effect   = "Deny"
        Action   = "organizations:LeaveOrganization"
        Resource = "*"
      }
    ]
  })
}

resource "aws_organizations_policy_attachment" "baseline" {
  policy_id = aws_organizations_policy.baseline_guardrails.id
  target_id = aws_organizations_organization.org.roots[0].id
}
```

```hcl
# Minimal GCP org setup
resource "google_folder" "shared_infra" {
  display_name = "Shared Infrastructure"
  parent       = "organizations/${var.org_id}"
}

resource "google_folder" "production" {
  display_name = "Production"
  parent       = "organizations/${var.org_id}"
}

resource "google_folder" "development" {
  display_name = "Development"
  parent       = "organizations/${var.org_id}"
}

resource "google_folder" "sandbox" {
  display_name = "Sandbox"
  parent       = "organizations/${var.org_id}"
}

# Shared VPC host project
module "shared_vpc_host" {
  source = "./modules/project-factory"

  project_name       = "Shared VPC Host"
  project_id         = "startup-network-hub"
  folder_id          = google_folder.shared_infra.id
  billing_account_id = var.billing_account_id
  org_id             = var.org_id
}

resource "google_compute_shared_vpc_host_project" "host" {
  project = module.shared_vpc_host.project_id
}

# Shared VPC network
resource "google_compute_network" "shared" {
  name                    = "shared-vpc"
  project                 = module.shared_vpc_host.project_id
  auto_create_subnetworks = false
}

# Prod project as service project
module "prod_project" {
  source = "./modules/project-factory"

  project_name       = "App Production"
  project_id         = "startup-app-prod"
  folder_id          = google_folder.production.id
  billing_account_id = var.billing_account_id
  org_id             = var.org_id
}

resource "google_compute_shared_vpc_service_project" "prod" {
  host_project    = module.shared_vpc_host.project_id
  service_project = module.prod_project.project_id
}

# Minimum org policies
# compute.vmExternalIpAccess is a list constraint, so it is blocked
# with deny_all rather than the boolean enforce flag
resource "google_org_policy_policy" "no_external_ip" {
  name   = "${google_folder.production.name}/policies/compute.vmExternalIpAccess"
  parent = google_folder.production.name

  spec {
    rules {
      deny_all = "TRUE"
    }
  }
}

resource "google_org_policy_policy" "no_default_sa_grants" {
  name   = "organizations/${var.org_id}/policies/iam.automaticIamGrantsForDefaultServiceAccounts"
  parent = "organizations/${var.org_id}"

  spec {
    rules {
      enforce = "TRUE"
    }
  }
}
```

Design Tradeoffs
| Decision | Startup Choice | Why |
|---|---|---|
| Logging | Org CloudTrail in management account | Simplicity — single bucket |
| Networking | VPC peering (not TGW) | Cost savings, only 2-3 VPCs |
| Security | Single security account | One person can manage |
| Environments | Dev + Prod (no staging) | Speed over process |
| Account factory | Manual Terraform | Not enough accounts to justify AFT |
| IAM | IAM Identity Center with 3-4 permission sets | Simple, covers all use cases |
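The last row's permission sets can be sketched with the Identity Center (SSO Admin) Terraform resources — the set name and session length below are illustrative:

```hcl
data "aws_ssoadmin_instances" "this" {}

# Illustrative permission set — a break-fix "admin" set with a 4-hour session;
# the other 2-3 sets (read-only, developer) follow the same shape
resource "aws_ssoadmin_permission_set" "admin" {
  name             = "AdministratorAccess"
  instance_arn     = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  session_duration = "PT4H"
}

resource "aws_ssoadmin_managed_policy_attachment" "admin" {
  instance_arn       = aws_ssoadmin_permission_set.admin.instance_arn
  managed_policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  permission_set_arn = aws_ssoadmin_permission_set.admin.arn
}
```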
Pattern 2: Mid-Size Enterprise (50-100 Accounts)
When to Use
- Series B-C startup or established enterprise business unit
- 10-30 engineering teams, 50-200 developers
- Multiple regions (primary + DR)
- Need proper environment separation, automated account vending, centralized networking
Architecture
Network Architecture — Hub and Spoke:
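The hub-and-spoke diagram is not reproduced here; its core is a Transit Gateway in the Network account with one attachment per spoke VPC, sketched below (variable names are illustrative):

```hcl
# Illustrative sketch — TGW hub shared to spoke accounts via AWS RAM
resource "aws_ec2_transit_gateway" "hub" {
  description                     = "central-network-hub"
  default_route_table_association = "disable" # explicit route tables per environment
  default_route_table_propagation = "disable"
}

# One attachment per spoke VPC (IDs come from the spoke account's state)
resource "aws_ec2_transit_gateway_vpc_attachment" "spoke" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = var.spoke_vpc_id
  subnet_ids         = var.spoke_private_subnet_ids
}
```

Disabling the default route table association is what later enables the per-environment (and, in Pattern 3, per-compliance-zone) route table isolation.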
Network Architecture — Shared VPC:
Key Design Decisions at This Scale
| Decision | Choice | Rationale |
|---|---|---|
| Account vending | AFT (AWS) / Custom TF module (GCP) | 50+ accounts require automation |
| Networking | Transit Gateway (AWS) / Shared VPC (GCP) | Hub-and-spoke scales, VPC peering mesh does not |
| Environments | Dev + Staging + Prod | Full SDLC with gate between each |
| Security aggregation | Delegated admin in Security account | Separation of duties from management account |
| Logging | Dedicated Log Archive with lifecycle policies | Compliance: 7 years retention, Glacier after 90 days |
| CIDR management | VPC IPAM (AWS) / Central subnet allocation (GCP) | Prevent overlaps at 50+ VPCs |
| DR | Multi-region (primary: ap-southeast-1, DR: me-south-1) | RPO < 4h, RTO < 1h for critical workloads |
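The AFT row above implies one account-request per account; a hedged sketch follows — the module path and emails are placeholders, and the field names follow AFT's `control_tower_parameters` convention, so verify them against your AFT version:

```hcl
# Illustrative AFT account request — one module call per vended account
module "team_alpha_dev" {
  source = "./modules/aft-account-request" # placeholder path

  control_tower_parameters = {
    AccountEmail              = "aws+alpha-dev@example.com"
    AccountName               = "alpha-dev"
    ManagedOrganizationalUnit = "Workloads"
    SSOUserEmail              = "platform@example.com"
    SSOUserFirstName          = "Platform"
    SSOUserLastName           = "Team"
  }

  account_tags = {
    Team        = "alpha"
    Environment = "dev"
  }
}
```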
Pattern 3: Regulated Enterprise (Financial Services / Healthcare)
When to Use
- Banks, insurance companies, healthcare organizations
- PCI-DSS, SOX, HIPAA, GDPR, or local regulations (e.g., UAE NESA, SAMA)
- Data residency requirements (data must stay in specific regions)
- Regular audits by regulators — must demonstrate controls
- 100-300 accounts, multiple compliance zones
Architecture
PCI Network Isolation:
VPC Service Controls (GCP-specific — critical for regulated environments):
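A minimal perimeter sketch, assuming an existing access policy and a PCI project number supplied as variables (both are placeholders):

```hcl
# Illustrative VPC Service Controls perimeter around the PCI zone —
# API calls to the restricted services cannot cross this boundary
resource "google_access_context_manager_service_perimeter" "pci" {
  parent = "accessPolicies/${var.access_policy_id}"
  name   = "accessPolicies/${var.access_policy_id}/servicePerimeters/pci_zone"
  title  = "pci-zone"

  status {
    resources = ["projects/${var.pci_project_number}"]
    restricted_services = [
      "storage.googleapis.com",
      "bigquery.googleapis.com",
    ]
  }
}
```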
Compliance-Specific Controls
| Regulation | AWS Controls | GCP Controls |
|---|---|---|
| PCI-DSS | Security Hub PCI standard, encrypted EBS, CloudTrail integrity validation, Network Firewall logs ALL traffic | VPC Service Controls, CMEK on all storage, Data Access Logs enabled, SCC PCI findings |
| Data Residency | SCP denying all regions except approved | Org policy gcp.resourceLocations restricting to specific regions |
| Key Management | KMS with automatic annual rotation, separate keys per compliance zone | Cloud KMS with rotation, separate keyrings per zone, CMEK for all services |
| Break-Glass | Dedicated IAM user in Identity Account, MFA hardware token, auto-expire session, SNS alert | Dedicated SA with time-limited IAM condition, Cloud Functions alert |
| Audit Trail | CloudTrail with S3 Object Lock (WORM), Config recorder, Access Analyzer | Cloud Audit Logs with locked retention buckets, Access Transparency |
| Network Segmentation | TGW route table isolation, Network Firewall between zones | VPC Service Controls, firewall rules, Private Service Connect |
Break-Glass Access Pattern
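The break-glass row in the controls table above can be sketched as an IAM role that is only assumable with MFA and auto-expires; the account variable and names below are illustrative:

```hcl
# Illustrative break-glass sketch — alerting on AssumeRole events
# (via CloudTrail + SNS) is handled separately
resource "aws_iam_role" "break_glass" {
  name                 = "break-glass-admin"
  max_session_duration = 3600 # session auto-expires after 1 hour

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::${var.identity_account_id}:user/break-glass" }
      Action    = "sts:AssumeRole"
      Condition = { Bool = { "aws:MultiFactorAuthPresent" = "true" } }
    }]
  })
}
```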
Pattern 4: Multi-Cloud Landing Zone (AWS + GCP)
When to Use
- Enterprise using both AWS and GCP (different teams or different workloads per cloud)
- Avoid vendor lock-in strategy
- Specific workloads suit specific clouds (ML on GCP, mainstream on AWS)
- Post-acquisition: inheriting a second cloud
Architecture
Unified Identity Flow
Unified Tagging Strategy
Both clouds must use the same tagging/labeling taxonomy:
| Tag Key | AWS (Tags) | GCP (Labels) | Example Values |
|---|---|---|---|
| team | Team tag | team label | alpha, beta, data-platform |
| environment | Environment tag | environment label | dev, staging, prod |
| cost-center | CostCenter tag | cost_center label | CC-1234 |
| data-classification | DataClassification tag | data_classification label | public, internal, confidential, restricted |
| compliance | Compliance tag | compliance label | pci-dss, hipaa, none |
| managed-by | ManagedBy tag | managed_by label | platform-team, terraform |
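One way to keep both clouds in lockstep is to derive the AWS tag map and the GCP label map from a single Terraform local, as in this sketch (values are illustrative); each module then receives `local.aws_tags` or `local.gcp_labels`:

```hcl
locals {
  # Single source of truth for the taxonomy
  taxonomy = {
    team                = "alpha"
    environment         = "prod"
    cost_center         = "CC-1234"
    data_classification = "internal"
  }

  # AWS tags use the CamelCase keys from the table above
  aws_tags = {
    Team               = local.taxonomy.team
    Environment        = local.taxonomy.environment
    CostCenter         = local.taxonomy.cost_center
    DataClassification = local.taxonomy.data_classification
  }

  # GCP label values must be lowercase
  gcp_labels = { for k, v in local.taxonomy : k => lower(v) }
}
```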
Cross-Cloud Policy with OPA
```rego
# Evaluated in CI/CD before terraform apply — works for BOTH clouds
package terraform.analysis

# Deny resources without required tags/labels
deny[msg] {
  resource := input.resource_changes[_]
  required_tags := {"team", "environment", "cost_center", "data_classification"}
  resource.change.after.tags != null
  provided_tags := {key | resource.change.after.tags[key]}
  missing := required_tags - provided_tags
  count(missing) > 0
  msg := sprintf("Resource %s is missing required tags: %v", [resource.address, missing])
}

# Deny public-facing resources in PCI-tagged projects
deny[msg] {
  resource := input.resource_changes[_]
  resource.change.after.tags.compliance == "pci-dss"
  resource.type == "aws_security_group_rule"
  resource.change.after.cidr_blocks[_] == "0.0.0.0/0"
  msg := sprintf("PCI resource %s cannot have public CIDR 0.0.0.0/0", [resource.address])
}

# Enforce encryption on all storage resources (both clouds)
deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket"
  not has_encryption(resource)
  msg := sprintf("S3 bucket %s must have encryption enabled", [resource.address])
}

deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "google_storage_bucket"
  not has_gcp_encryption(resource)
  msg := sprintf("GCS bucket %s must have CMEK encryption", [resource.address])
}
```

Cross-Cloud Networking
```hcl
# AWS side — VPN to GCP
resource "aws_vpn_gateway" "gcp_vpn" {
  vpc_id = module.network_hub.vpc_id

  tags = { Name = "vpn-to-gcp" }
}

resource "aws_customer_gateway" "gcp" {
  bgp_asn    = 65515 # GCP Cloud Router ASN
  ip_address = google_compute_ha_vpn_gateway.aws_vpn.vpn_interfaces[0].ip_address
  type       = "ipsec.1"

  tags = { Name = "gcp-cloud-router" }
}

resource "aws_vpn_connection" "gcp" {
  vpn_gateway_id      = aws_vpn_gateway.gcp_vpn.id
  customer_gateway_id = aws_customer_gateway.gcp.id
  type                = "ipsec.1"

  static_routes_only = false # Use BGP for dynamic routing

  tags = { Name = "aws-to-gcp-vpn" }
}
```

```hcl
# GCP side — HA VPN to AWS
resource "google_compute_ha_vpn_gateway" "aws_vpn" {
  name    = "vpn-to-aws"
  project = var.network_hub_project
  network = google_compute_network.shared_vpc.id
  region  = "me-central1"
}

resource "google_compute_router" "vpn_router" {
  name    = "vpn-router-aws"
  project = var.network_hub_project
  network = google_compute_network.shared_vpc.id
  region  = "me-central1"

  bgp {
    asn = 65515
  }
}

resource "google_compute_vpn_tunnel" "aws_tunnel" {
  name                  = "vpn-tunnel-to-aws"
  project               = var.network_hub_project
  region                = "me-central1"
  vpn_gateway           = google_compute_ha_vpn_gateway.aws_vpn.id
  peer_external_gateway = google_compute_external_vpn_gateway.aws.id
  shared_secret         = var.vpn_shared_secret # From Secrets Manager
  router                = google_compute_router.vpn_router.id

  vpn_gateway_interface           = 0
  peer_external_gateway_interface = 0
}
```

Pattern 5: Global Enterprise (500+ Accounts, 5+ Regions)
When to Use
- Large multinational bank or enterprise
- Presence in multiple geographies (Americas, EMEA, APAC)
- 500+ accounts, 50+ teams, 1000+ developers
- Data sovereignty requirements per region
- Regional compliance regimes (GDPR in EU, NESA in UAE, MAS in Singapore)
Architecture
Inter-Region Network Architecture:
Delegated Admin Pattern:
At 500+ accounts, a single platform team cannot manage everything. Implement delegated administration:
AWS Delegated Administrator feature:
```hcl
# Delegate GuardDuty admin to regional security account
resource "aws_guardduty_organization_admin_account" "apac" {
  admin_account_id = aws_organizations_account.apac_security.id
}

# Delegate Config aggregator to audit account
resource "aws_organizations_delegated_administrator" "config" {
  account_id        = aws_organizations_account.audit.id
  service_principal = "config.amazonaws.com"
}

# Delegate Security Hub to security tooling account
resource "aws_organizations_delegated_administrator" "securityhub" {
  account_id        = aws_organizations_account.security_tooling.id
  service_principal = "securityhub.amazonaws.com"
}
```

Account Lifecycle Management at Scale
With 500+ accounts, lifecycle management becomes critical:
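Decommissioning is the step most often missed; a hedged sketch of a clean retirement path (the Suspended OU and account names are illustrative):

```hcl
# Illustrative decommissioning sketch — move the account to a quarantine OU
# with restrictive SCPs, then let Terraform close it when the resource is
# removed (close_on_deletion is an aws_organizations_account argument)
resource "aws_organizations_account" "retiring" {
  name              = "team-gamma-dev"
  email             = "aws+team-gamma-dev@bank.com"
  parent_id         = aws_organizations_organizational_unit.suspended.id
  close_on_deletion = true # account is closed, not orphaned, on destroy
}
```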
Centralized vs Federated Security Model
| Aspect | Centralized | Federated | Recommendation |
|---|---|---|---|
| GuardDuty/SCC | Single admin account for all regions | Regional admin accounts | Federated — regional teams triage regional findings |
| CloudTrail/Audit Logs | Single Log Archive | Regional log archives | Centralized — single source of truth for compliance |
| IAM Identity Center | Single instance (global) | N/A (SSO is global) | Centralized — one SSO, one IdP federation |
| Network Firewall rules | Global rule set | Regional rule sets | Federated — regional compliance needs different rules |
| SCP management | Global platform team | Regional platform teams can request | Centralized — SCPs affect all accounts, changes need global review |
| Account vending | Global pipeline | Regional pipelines | Federated — regional teams vend accounts in their region |
| Incident response | Global SOC | Regional SOC with global escalation | Hybrid — regional first response, global for P1/P2 |
CIDR Planning for Global Scale
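A sketch of hierarchical carving with Terraform's `cidrsubnet` — the 10.0.0.0/8 supernet and the /12-per-region split are illustrative assumptions, not the document's actual plan:

```hcl
locals {
  global_supernet = "10.0.0.0/8" # illustrative

  # One /12 per region, allocated from the supernet
  region_pools = {
    amer = cidrsubnet(local.global_supernet, 4, 0) # 10.0.0.0/12
    emea = cidrsubnet(local.global_supernet, 4, 1) # 10.16.0.0/12
    apac = cidrsubnet(local.global_supernet, 4, 2) # 10.32.0.0/12
  }

  # Each region carves /16 VPCs out of its pool (up to 16 per /12)
  emea_vpc_cidrs = [for i in range(4) : cidrsubnet(local.region_pools.emea, 4, i)]
  # ["10.16.0.0/16", "10.17.0.0/16", "10.18.0.0/16", "10.19.0.0/16"]
}
```

Because every region draws from a disjoint pool, no two VPCs can overlap regardless of which team vends them; the same pools feed VPC IPAM on AWS and central subnet allocation on GCP.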
Terraform — Regional Account Factory:
```hcl
# Each region has its own Terraform workspace and state
variable "region_config" {
  type = object({
    region_name         = string
    aws_region          = string
    cidr_pool           = string
    tgw_id              = string
    ou_prod_id          = string
    ou_nonprod_id       = string
    security_account_id = string
  })
}

variable "teams" {
  type = list(object({
    name         = string
    environments = list(string)
    budget_prod  = number
    budget_dev   = number
    compliance   = string
  }))
}

# Create accounts for all teams in this region
module "team_accounts" {
  source = "../aft-account-request"

  for_each = {
    for item in flatten([
      for team in var.teams : [
        for env in team.environments : {
          key        = "${team.name}-${env}"
          name       = "${var.region_config.region_name}-${team.name}-${env}"
          email      = "aws+${var.region_config.region_name}-${team.name}-${env}@bank.com"
          ou         = env == "prod" ? var.region_config.ou_prod_id : var.region_config.ou_nonprod_id
          team       = team.name
          env        = env
          budget     = env == "prod" ? team.budget_prod : team.budget_dev
          compliance = team.compliance
        }
      ]
    ]) : item.key => item
  }

  account_name  = each.value.name
  account_email = each.value.email
  ou_id         = each.value.ou

  tags = {
    Team        = each.value.team
    Environment = each.value.env
    Region      = var.region_config.region_name
    Compliance  = each.value.compliance
  }
}

# Regional SCP — restrict to this region only
resource "aws_organizations_policy" "region_lock" {
  name = "${var.region_config.region_name}-region-lock"
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "DenyOtherRegions"
        Effect = "Deny"
        NotAction = [
          "iam:*",
          "sts:*",
          "organizations:*",
          "support:*",
          "budgets:*",
          "cloudfront:*",
          "route53:*",
          "wafv2:*",
          "s3:GetBucketLocation",
          "s3:ListAllMyBuckets",
        ]
        Resource = "*"
        Condition = {
          StringNotEquals = {
            "aws:RequestedRegion" = [var.region_config.aws_region]
          }
        }
      }
    ]
  })
}
```

Cross-Region Network Architecture:
Each region gets its own Shared VPC (host project) for network isolation and compliance boundary enforcement. NCC connects the regional Shared VPCs when cross-region traffic is needed.
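A minimal sketch of that NCC topology (project variable, hub, and network names are assumptions):

```hcl
# Illustrative NCC sketch — one hub, one spoke per regional Shared VPC
resource "google_network_connectivity_hub" "global" {
  name    = "global-hub"
  project = var.network_hub_project
}

resource "google_network_connectivity_spoke" "emea" {
  name     = "emea-shared-vpc"
  project  = var.network_hub_project
  location = "global" # VPC spokes are global resources
  hub      = google_network_connectivity_hub.global.id

  linked_vpc_network {
    uri = google_compute_network.emea_shared_vpc.self_link
  }
}
```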
Delegated Admin Pattern — GCP Folder-Level IAM:
GCP does not have a “delegated administrator” API like AWS. Instead, use folder-level IAM bindings to give regional teams admin over their folder subtree while the global team retains org-level control.
Folder-level IAM Terraform:
```hcl
# Regional admin groups — each group gets admin on their folder
resource "google_folder_iam_member" "apac_folder_admin" {
  folder = google_folder.apac.name
  role   = "roles/resourcemanager.folderAdmin"
  member = "group:platform-apac@bank.com"
}

resource "google_folder_iam_member" "apac_network_admin" {
  folder = google_folder.apac.name
  role   = "roles/compute.networkAdmin"
  member = "group:platform-apac@bank.com"
}

resource "google_folder_iam_member" "emea_folder_admin" {
  folder = google_folder.emea.name
  role   = "roles/resourcemanager.folderAdmin"
  member = "group:platform-emea@bank.com"
}

# EMEA gets Assured Workloads admin for GDPR compliance
resource "google_folder_iam_member" "emea_assured_workloads" {
  folder = google_folder.emea.name
  role   = "roles/assuredworkloads.admin"
  member = "group:platform-emea@bank.com"
}

resource "google_folder_iam_member" "amer_folder_admin" {
  folder = google_folder.amer.name
  role   = "roles/resourcemanager.folderAdmin"
  member = "group:platform-amer@bank.com"
}
```

Assured Workloads for Multi-Region Compliance:
Assured Workloads enforces data residency and compliance controls at the folder level. Each region that has regulatory requirements gets its own Assured Workloads environment.
| Region | Compliance Regime | Assured Workloads Type | Data Residency |
|---|---|---|---|
| EMEA | GDPR | EU Regions and Support | europe-west1, europe-west3 only |
| APAC | MAS TRM (Singapore) | Regional Controls | asia-southeast1 only |
| AMER | SOX, FINRA | US Regional Controls | us-east1, us-central1 only |
| UAE/ME | NESA, CBUAE | Custom (org policy) | me-central1, me-central2 only |
```hcl
# Assured Workloads for EMEA — enforces GDPR data residency
resource "google_assured_workloads_workload" "emea_gdpr" {
  display_name      = "EMEA-GDPR-Workload"
  compliance_regime = "EU_REGIONS_AND_SUPPORT"
  billing_account   = "billingAccounts/${var.billing_account}"
  organization      = var.org_id
  location          = "europe-west1"

  resource_settings {
    resource_type = "CONSUMER_FOLDER"
    display_name  = "EMEA-GDPR-Projects"
  }

  labels = {
    compliance = "gdpr"
    region     = "emea"
  }
}
```

Project Lifecycle Management at Scale:
GCP equivalent of AWS account lifecycle — projects are the unit of isolation.
Terraform — Regional Project Factory:
```hcl
# Each region has its own Terraform workspace and state
variable "region_config" {
  type = object({
    region_name       = string
    gcp_region        = string
    cidr_pool         = string
    host_project_id   = string
    folder_prod_id    = string
    folder_nonprod_id = string
    billing_account   = string
  })
}

variable "teams" {
  type = list(object({
    name         = string
    environments = list(string)
    budget_prod  = number
    budget_dev   = number
    compliance   = string
  }))
}

# Regional folder structure
resource "google_folder" "region" {
  display_name = var.region_config.region_name
  parent       = "organizations/${var.org_id}"
}

resource "google_folder" "prod" {
  display_name = "${var.region_config.region_name}-Production"
  parent       = google_folder.region.name
}

resource "google_folder" "nonprod" {
  display_name = "${var.region_config.region_name}-NonProduction"
  parent       = google_folder.region.name
}

# Org policy — restrict resource locations to this region
resource "google_org_policy_policy" "region_lock" {
  name   = "${google_folder.region.name}/policies/gcp.resourceLocations"
  parent = google_folder.region.name

  spec {
    rules {
      values {
        allowed_values = ["in:${var.region_config.gcp_region}-locations"]
      }
    }
  }
}

# Create projects for all teams in this region
module "team_projects" {
  source = "../project-factory"

  for_each = {
    for item in flatten([
      for team in var.teams : [
        for env in team.environments : {
          key        = "${team.name}-${env}"
          name       = "${var.region_config.region_name}-${team.name}-${env}"
          folder     = env == "prod" ? google_folder.prod.name : google_folder.nonprod.name
          team       = team.name
          env        = env
          budget     = env == "prod" ? team.budget_prod : team.budget_dev
          compliance = team.compliance
        }
      ]
    ]) : item.key => item
  }

  project_name    = each.value.name
  project_id      = each.value.name
  folder_id       = each.value.folder
  billing_account = var.region_config.billing_account

  labels = {
    team        = each.value.team
    environment = each.value.env
    region      = lower(var.region_config.region_name)
    compliance  = each.value.compliance
  }

  # Attach as service project to regional Shared VPC
  shared_vpc_host_project = var.region_config.host_project_id

  activate_apis = [
    "compute.googleapis.com",
    "container.googleapis.com",
    "sqladmin.googleapis.com",
    "logging.googleapis.com",
    "monitoring.googleapis.com",
    "cloudkms.googleapis.com",
  ]
}

# Budget alerts per project
resource "google_billing_budget" "team_budgets" {
  for_each = module.team_projects

  billing_account = var.region_config.billing_account
  display_name    = "Budget: ${each.key}"

  budget_filter {
    projects = ["projects/${each.value.project_number}"]
  }

  amount {
    specified_amount {
      currency_code = "USD"
      units         = each.value.budget
    }
  }

  threshold_rules {
    threshold_percent = 0.8
    spend_basis       = "CURRENT_SPEND"
  }
  threshold_rules {
    threshold_percent = 1.0
    spend_basis       = "CURRENT_SPEND"
  }
}
```
Pattern Comparison Matrix
| Aspect | Pattern 1 (Startup) | Pattern 2 (Mid-size) | Pattern 3 (Regulated) | Pattern 4 (Multi-cloud) | Pattern 5 (Global) |
|---|---|---|---|---|---|
| Accounts | 5-10 | 50-100 | 100-300 | 50-200 (per cloud) | 500+ |
| Teams | 1-3 | 10-30 | 20-50 | 10-30 | 50+ |
| Regions | 1 | 1-2 | 1-2 | 1-2 per cloud | 5+ |
| Networking | VPC peering | Transit Gateway | TGW + route isolation | TGW + VPN cross-cloud | TGW per region + peering |
| Identity | IAM Identity Center | Identity Center + IdP | Identity Center + MFA + break-glass | Okta → both clouds | Identity Center + delegated |
| Logging | Same-account S3 | Dedicated Log Archive | WORM storage + 7yr retention | SIEM aggregating both | Regional + global aggregation |
| Security | GuardDuty basic | GuardDuty + Security Hub | + Macie + Detective + forensics | + OPA cross-cloud policy | + delegated admin per region |
| Compliance | None formal | SOC2 | PCI-DSS, SOX, HIPAA | Unified tagging + OPA | Per-region regulatory |
| Account vending | Manual TF | AFT | AFT + compliance checks | AFT + GCP project factory | Regional AFT instances |
| Setup time | 1-2 days | 1-2 weeks | 4-8 weeks | 4-6 weeks | 8-16 weeks |
| Team size | 1-2 | 3-5 | 5-10 | 5-10 | 15-25 (global + regional) |
Interview Scenario: Choosing a Pattern
Q: "You are hired as a Cloud Architect for a mid-size bank in Dubai with 15 engineering teams. They are on-prem today and want to move to cloud. Design the landing zone."
Model Answer:
I would recommend Pattern 3 (Regulated) with elements of Pattern 2 (Mid-size) for the initial implementation, designed to scale to Pattern 5 if the bank expands regionally.
Why Pattern 3 and not Pattern 2: Even though the bank is mid-size (15 teams, likely 60-80 accounts), it is a regulated financial institution. The UAE Central Bank (CBUAE) and NESA impose requirements for data residency, audit trails, encryption, and network segmentation. Starting with Pattern 2 and retrofitting compliance later is 3-5x more expensive than building it right from the start.
Recommended architecture:
Implementation plan:
- Week 1-2: Management account, Control Tower, Security OU accounts, SCPs
- Week 3-4: Network Hub (TGW, Network Firewall, Direct Connect to on-prem DC in Dubai)
- Week 5-6: AFT setup, account factory module, global baseline (VPC + IAM + logging + monitoring)
- Week 7-8: PCI zone (isolated TGW route table, stricter SCPs, Macie for data scanning)
- Week 9-10: Vend all 60-80 workload accounts, SSO setup, team onboarding
- Week 11-12: Testing, compliance audit dry-run, documentation
Data residency: SCP denying all regions except me-south-1 (Bahrain) and optionally me-central-1 (Qatar) for DR. This satisfies CBUAE data residency requirements.
Total: 12 weeks to production-ready, 1 dedicated platform team of 4-5 engineers.
References
- Customizations for AWS Control Tower (CfCT) — extending Control Tower with custom guardrails and baselines
- AWS Well-Architected Framework — foundational questions for evaluating cloud architectures
- AWS Security Reference Architecture — multi-account security architecture patterns
- GCP Security Foundations Blueprint — enterprise foundation blueprint with defense-in-depth security
- Google Cloud Architecture Framework — Well-Architected guidance for GCP workloads
Tools & Frameworks
- Terraform Google Cloud Foundation Toolkit — reference Terraform for GCP enterprise foundations