
Landing Zone Architecture Scenarios

This page covers five progressively complex landing zone architectures. Each pattern builds on the previous one, adding governance, isolation, and scale. The central infrastructure team selects the right pattern based on the organization’s size, compliance requirements, and operational maturity.

Five landing zone patterns — from startup to global enterprise


Pattern 1: Startup Landing Zone (5-10 Accounts)

  • Seed or Series A startup, or a small team within a larger org piloting cloud
  • 1-3 engineering teams, 5-20 developers
  • Single region, single cloud
  • Need to move fast but with minimum viable governance

Pattern 1 — Startup AWS organization structure

Network (simple — no Transit Gateway yet):

Pattern 1 — Startup network topology

# Minimal org setup — 5 accounts
resource "aws_organizations_organization" "org" {
  aws_service_access_principals = [
    "controltower.amazonaws.com",
    "guardduty.amazonaws.com",
    "securityhub.amazonaws.com",
    "config.amazonaws.com",
    "sso.amazonaws.com",
  ]
  feature_set          = "ALL"
  enabled_policy_types = ["SERVICE_CONTROL_POLICY"]
}

# Organizational Units
resource "aws_organizations_organizational_unit" "security" {
  name      = "Security"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_organizational_unit" "workloads" {
  name      = "Workloads"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_organizational_unit" "shared" {
  name      = "Shared"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_organizational_unit" "sandbox" {
  name      = "Sandbox"
  parent_id = aws_organizations_organization.org.roots[0].id
}

# Accounts
resource "aws_organizations_account" "security" {
  name      = "security"
  email     = "aws+security@startup.com"
  parent_id = aws_organizations_organizational_unit.security.id
  role_name = "OrganizationAccountAccessRole"

  lifecycle {
    ignore_changes = [role_name]
  }
}

resource "aws_organizations_account" "prod" {
  name      = "production"
  email     = "aws+prod@startup.com"
  parent_id = aws_organizations_organizational_unit.workloads.id
  role_name = "OrganizationAccountAccessRole"
}

resource "aws_organizations_account" "dev" {
  name      = "development"
  email     = "aws+dev@startup.com"
  parent_id = aws_organizations_organizational_unit.workloads.id
  role_name = "OrganizationAccountAccessRole"
}

# Minimum viable SCP — deny dangerous actions
resource "aws_organizations_policy" "baseline_guardrails" {
  name        = "baseline-guardrails"
  description = "Minimum guardrails for all accounts"
  type        = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "DenyRootUser"
        Effect   = "Deny"
        Action   = "*"
        Resource = "*"
        Condition = {
          StringLike = {
            "aws:PrincipalArn" = "arn:aws:iam::*:root"
          }
        }
      },
      {
        Sid      = "DenyLeaveOrganization"
        Effect   = "Deny"
        Action   = "organizations:LeaveOrganization"
        Resource = "*"
      }
    ]
  })
}

resource "aws_organizations_policy_attachment" "baseline" {
  policy_id = aws_organizations_policy.baseline_guardrails.id
  target_id = aws_organizations_organization.org.roots[0].id
}
| Decision | Startup Choice | Why |
| --- | --- | --- |
| Logging | Org CloudTrail in management account | Simplicity — single bucket |
| Networking | VPC peering (not TGW) | Cost savings, only 2-3 VPCs |
| Security | Single security account | One person can manage |
| Environments | Dev + Prod (no staging) | Speed over process |
| Account factory | Manual Terraform | Not enough accounts to justify AFT |
| IAM | IAM Identity Center with 3-4 permission sets | Simple, covers all use cases |
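The handful of permission sets in the IAM row can be expressed directly in Terraform. A minimal sketch — the names, session durations, and choice of managed policies are illustrative, not prescriptive:

```hcl
# Sketch: two of the 3-4 permission sets a startup typically needs.
# Session durations and policy choices are example values.
data "aws_ssoadmin_instances" "this" {}

locals {
  sso_instance_arn = tolist(data.aws_ssoadmin_instances.this.arns)[0]
}

resource "aws_ssoadmin_permission_set" "admin" {
  name             = "AdministratorAccess"
  instance_arn     = local.sso_instance_arn
  session_duration = "PT4H" # shorter sessions for admin access
}

resource "aws_ssoadmin_managed_policy_attachment" "admin" {
  instance_arn       = local.sso_instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.admin.arn
  managed_policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

resource "aws_ssoadmin_permission_set" "readonly" {
  name             = "ReadOnlyAccess"
  instance_arn     = local.sso_instance_arn
  session_duration = "PT8H"
}

resource "aws_ssoadmin_managed_policy_attachment" "readonly" {
  instance_arn       = local.sso_instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.readonly.arn
  managed_policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```

Assignments of these permission sets to groups and accounts are then managed with `aws_ssoadmin_account_assignment`.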

Pattern 2: Mid-Size Enterprise (50-100 Accounts)

  • Series B-C startup or established enterprise business unit
  • 10-30 engineering teams, 50-200 developers
  • Multiple regions (primary + DR)
  • Need proper environment separation, automated account vending, centralized networking

Pattern 2 — Mid-size enterprise AWS organization

Network Architecture — Hub and Spoke:

Pattern 2 — Hub-and-spoke network architecture

| Decision | Choice | Rationale |
| --- | --- | --- |
| Account vending | AFT (AWS) / Custom TF module (GCP) | 50+ accounts require automation |
| Networking | Transit Gateway (AWS) / Shared VPC (GCP) | Hub-and-spoke scales, VPC peering mesh does not |
| Environments | Dev + Staging + Prod | Full SDLC with gate between each |
| Security aggregation | Delegated admin in Security account | Separation of duties from management account |
| Logging | Dedicated Log Archive with lifecycle policies | Compliance: 7 years retention, Glacier after 90 days |
| CIDR management | VPC IPAM (AWS) / Central subnet allocation (GCP) | Prevent overlaps at 50+ VPCs |
| DR | Multi-region (primary: ap-southeast-1, DR: me-south-1) | RPO < 4h, RTO < 1h for critical workloads |
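On the AWS side, the Transit Gateway decision usually reduces to one hub and many spoke attachments. A hedged sketch — the org ARN, account ID, and variable names are placeholders:

```hcl
# Sketch: TGW in the Network Hub account, shared org-wide via RAM.
# Org ARN and account ID below are placeholders.
resource "aws_ec2_transit_gateway" "hub" {
  description                     = "central-network-hub"
  default_route_table_association = "disable" # explicit route tables per zone
  default_route_table_propagation = "disable"
  auto_accept_shared_attachments  = "enable"
}

resource "aws_ram_resource_share" "tgw" {
  name                      = "tgw-org-share"
  allow_external_principals = false
}

resource "aws_ram_resource_association" "tgw" {
  resource_arn       = aws_ec2_transit_gateway.hub.arn
  resource_share_arn = aws_ram_resource_share.tgw.arn
}

resource "aws_ram_principal_association" "org" {
  # Share with the whole organization (placeholder ARN)
  principal          = "arn:aws:organizations::111111111111:organization/o-example"
  resource_share_arn = aws_ram_resource_share.tgw.arn
}

# In each spoke account: attach the workload VPC to the shared TGW
resource "aws_ec2_transit_gateway_vpc_attachment" "spoke" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = var.spoke_vpc_id
  subnet_ids         = var.spoke_private_subnet_ids
}
```

Disabling the default route table association/propagation is what later enables per-zone route isolation (used heavily in Pattern 3).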

Pattern 3: Regulated Enterprise (Financial Services / Healthcare)

  • Banks, insurance companies, healthcare organizations
  • PCI-DSS, SOX, HIPAA, GDPR, or local regulations (e.g., UAE NESA, SAMA)
  • Data residency requirements (data must stay in specific regions)
  • Regular audits by regulators — must demonstrate controls
  • 100-300 accounts, multiple compliance zones

Pattern 3 — Regulated enterprise AWS organization with PCI zone

PCI Network Isolation:

Pattern 3 — PCI network isolation via TGW route tables

| Regulation | AWS Controls | GCP Controls |
| --- | --- | --- |
| PCI-DSS | Security Hub PCI standard, encrypted EBS, CloudTrail integrity validation, Network Firewall logs ALL traffic | VPC Service Controls, CMEK on all storage, Data Access Logs enabled, SCC PCI findings |
| Data Residency | SCP denying all regions except approved | Org policy gcp.resourceLocations restricting to specific regions |
| Key Management | KMS with automatic annual rotation, separate keys per compliance zone | Cloud KMS with rotation, separate keyrings per zone, CMEK for all services |
| Break-Glass | Dedicated IAM user in Identity Account, MFA hardware token, auto-expire session, SNS alert | Dedicated SA with time-limited IAM condition, Cloud Functions alert |
| Audit Trail | CloudTrail with S3 Object Lock (WORM), Config recorder, Access Analyzer | Cloud Audit Logs with locked retention buckets, Access Transparency |
| Network Segmentation | TGW route table isolation, Network Firewall between zones | VPC Service Controls, firewall rules, Private Service Connect |
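The WORM audit-trail row can be sketched as an S3 bucket with Object Lock in compliance mode, so that not even an administrator can shorten the 7-year retention. The bucket name is a placeholder:

```hcl
# Sketch: WORM storage for the audit trail. COMPLIANCE mode means the
# retention cannot be shortened or removed by any principal, including root.
resource "aws_s3_bucket" "audit_trail" {
  bucket              = "org-audit-trail-example" # placeholder name
  object_lock_enabled = true                      # must be set at creation
}

resource "aws_s3_bucket_versioning" "audit_trail" {
  bucket = aws_s3_bucket.audit_trail.id
  versioning_configuration {
    status = "Enabled" # Object Lock requires versioning
  }
}

resource "aws_s3_bucket_object_lock_configuration" "audit_trail" {
  bucket = aws_s3_bucket.audit_trail.id
  rule {
    default_retention {
      mode  = "COMPLIANCE"
      years = 7
    }
  }
}
```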

Break-glass access pattern — normal vs emergency
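A hedged sketch of the AWS break-glass column: a dedicated role assumable only with MFA and capped at a one-hour session. The account ID and names are placeholders, and the SNS alert on every AssumeRole call would be wired separately (e.g., via an EventBridge rule on CloudTrail events):

```hcl
# Sketch: break-glass role — MFA required, sessions auto-expire at 1h.
# Account ID and principal name are placeholders.
resource "aws_iam_role" "break_glass" {
  name                 = "break-glass-admin"
  max_session_duration = 3600 # auto-expire session after 1 hour

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::111111111111:user/break-glass" }
      Action    = "sts:AssumeRole"
      Condition = {
        Bool = { "aws:MultiFactorAuthPresent" = "true" } # hardware MFA token
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "break_glass_admin" {
  role       = aws_iam_role.break_glass.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
```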


Pattern 4: Multi-Cloud Landing Zone (AWS + GCP)

  • Enterprise using both AWS and GCP (different teams or different workloads per cloud)
  • Avoid vendor lock-in strategy
  • Specific workloads suit specific clouds (ML on GCP, mainstream on AWS)
  • Post-acquisition: inheriting a second cloud

Pattern 4 — Multi-cloud landing zone with unified control plane

Unified identity flow — Okta to AWS and GCP

Both clouds must use the same tagging/labeling taxonomy:

| Tag Key | AWS (Tags) | GCP (Labels) | Example Values |
| --- | --- | --- | --- |
| team | Team tag | team label | alpha, beta, data-platform |
| environment | Environment tag | environment label | dev, staging, prod |
| cost-center | CostCenter tag | cost_center label | CC-1234 |
| data-classification | DataClassification tag | data_classification label | public, internal, confidential, restricted |
| compliance | Compliance tag | compliance label | pci-dss, hipaa, none |
| managed-by | ManagedBy tag | managed_by label | platform-team, terraform |
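One way to keep the taxonomy identical across both clouds is to define it once in Terraform and derive each provider's form from it. A sketch, assuming PascalCase AWS tag keys as in the table and lowercased GCP label values (GCP requires lowercase):

```hcl
# Sketch: single source of truth for the taxonomy; values are examples.
locals {
  taxonomy = {
    team                = "alpha"
    environment         = "prod"
    cost_center         = "CC-1234"
    data_classification = "internal"
    compliance          = "none"
    managed_by          = "terraform"
  }

  # AWS tags: PascalCase keys per the table above
  aws_tags = {
    Team               = local.taxonomy.team
    Environment        = local.taxonomy.environment
    CostCenter         = local.taxonomy.cost_center
    DataClassification = local.taxonomy.data_classification
    Compliance         = local.taxonomy.compliance
    ManagedBy          = local.taxonomy.managed_by
  }

  # GCP labels: snake_case keys; values must be lowercase
  gcp_labels = { for k, v in local.taxonomy : k => lower(v) }
}
```

Passing `local.aws_tags` to every AWS resource and `local.gcp_labels` to every GCP resource keeps the OPA policy below enforceable on both sides.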
policy/terraform-plan.rego
# Evaluated in CI/CD before terraform apply — works for BOTH clouds
package terraform.analysis

# Deny resources without required tags/labels
deny[msg] {
  resource := input.resource_changes[_]
  required_tags := {"team", "environment", "cost_center", "data_classification"}
  resource.change.after.tags != null
  provided_tags := {key | resource.change.after.tags[key]}
  missing := required_tags - provided_tags
  count(missing) > 0
  msg := sprintf("Resource %s is missing required tags: %v", [resource.address, missing])
}

# Deny public-facing resources in PCI-tagged projects
deny[msg] {
  resource := input.resource_changes[_]
  resource.change.after.tags.compliance == "pci-dss"
  resource.type == "aws_security_group_rule"
  resource.change.after.cidr_blocks[_] == "0.0.0.0/0"
  msg := sprintf("PCI resource %s cannot have public CIDR 0.0.0.0/0", [resource.address])
}

# Enforce encryption on all storage resources (both clouds)
deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket"
  not has_encryption(resource)
  msg := sprintf("S3 bucket %s must have encryption enabled", [resource.address])
}

deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "google_storage_bucket"
  not has_gcp_encryption(resource)
  msg := sprintf("GCS bucket %s must have CMEK encryption", [resource.address])
}

# Helpers for the encryption rules above — sketches that assume the
# encryption settings appear in planned values; adjust the attribute
# paths to match your provider versions.
has_encryption(resource) {
  resource.change.after.server_side_encryption_configuration[_]
}

has_gcp_encryption(resource) {
  resource.change.after.encryption[_].default_kms_key_name != ""
}
# AWS side — VPN to GCP
resource "aws_vpn_gateway" "gcp_vpn" {
  vpc_id = module.network_hub.vpc_id

  tags = {
    Name = "vpn-to-gcp"
  }
}

resource "aws_customer_gateway" "gcp" {
  bgp_asn    = 65515 # GCP Cloud Router ASN
  ip_address = google_compute_ha_vpn_gateway.aws_vpn.vpn_interfaces[0].ip_address
  type       = "ipsec.1"

  tags = {
    Name = "gcp-cloud-router"
  }
}

resource "aws_vpn_connection" "gcp" {
  vpn_gateway_id      = aws_vpn_gateway.gcp_vpn.id
  customer_gateway_id = aws_customer_gateway.gcp.id
  type                = "ipsec.1"
  static_routes_only  = false # Use BGP for dynamic routing

  tags = {
    Name = "aws-to-gcp-vpn"
  }
}

# GCP side — HA VPN to AWS
resource "google_compute_ha_vpn_gateway" "aws_vpn" {
  name    = "vpn-to-aws"
  project = var.network_hub_project
  network = google_compute_network.shared_vpc.id
  region  = "me-central1"
}

resource "google_compute_router" "vpn_router" {
  name    = "vpn-router-aws"
  project = var.network_hub_project
  network = google_compute_network.shared_vpc.id
  region  = "me-central1"

  bgp {
    asn = 65515
  }
}

# GCP-side representation of the AWS tunnel endpoint, referenced by the
# tunnel below
resource "google_compute_external_vpn_gateway" "aws" {
  name            = "aws-vpn-endpoint"
  project         = var.network_hub_project
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"

  interface {
    id         = 0
    ip_address = aws_vpn_connection.gcp.tunnel1_address
  }
}

resource "google_compute_vpn_tunnel" "aws_tunnel" {
  name                            = "vpn-tunnel-to-aws"
  project                         = var.network_hub_project
  region                          = "me-central1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.aws_vpn.id
  peer_external_gateway           = google_compute_external_vpn_gateway.aws.id
  shared_secret                   = var.vpn_shared_secret # From Secrets Manager
  router                          = google_compute_router.vpn_router.id
  vpn_gateway_interface           = 0
  peer_external_gateway_interface = 0
}

Pattern 5: Global Enterprise (500+ Accounts, 5+ Regions)

  • Large multinational bank or enterprise
  • Presence in multiple geographies (Americas, EMEA, APAC)
  • 500+ accounts, 50+ teams, 1000+ developers
  • Data sovereignty requirements per region
  • Regional compliance regimes (GDPR in EU, NESA in UAE, MAS in Singapore)

Pattern 5 — Global enterprise AWS architecture with regional isolation

Inter-Region Network Architecture:

Inter-region TGW peering architecture

Delegated Admin Pattern:

At 500+ accounts, a single platform team cannot manage everything. Implement delegated administration:

Delegated administration — global and regional platform teams

AWS Delegated Administrator feature:

# Delegate GuardDuty admin to regional security account
resource "aws_guardduty_organization_admin_account" "apac" {
  admin_account_id = aws_organizations_account.apac_security.id
}

# Delegate Config aggregator to audit account
resource "aws_organizations_delegated_administrator" "config" {
  account_id        = aws_organizations_account.audit.id
  service_principal = "config.amazonaws.com"
}

# Delegate Security Hub to security tooling account
resource "aws_organizations_delegated_administrator" "securityhub" {
  account_id        = aws_organizations_account.security_tooling.id
  service_principal = "securityhub.amazonaws.com"
}

With 500+ accounts, lifecycle management becomes critical:

Account lifecycle — create, baseline, active, decommission

| Aspect | Centralized | Federated | Recommendation |
| --- | --- | --- | --- |
| GuardDuty/SCC | Single admin account for all regions | Regional admin accounts | Federated — regional teams triage regional findings |
| CloudTrail/Audit Logs | Single Log Archive | Regional log archives | Centralized — single source of truth for compliance |
| IAM Identity Center | Single instance (global) | N/A (SSO is global) | Centralized — one SSO, one IdP federation |
| Network Firewall rules | Global rule set | Regional rule sets | Federated — regional compliance needs different rules |
| SCP management | Global platform team | Regional platform teams can request | Centralized — SCPs affect all accounts, changes need global review |
| Account vending | Global pipeline | Regional pipelines | Federated — regional teams vend accounts in their region |
| Incident response | Global SOC | Regional SOC with global escalation | Hybrid — regional first response, global for P1/P2 |

Global CIDR allocation — APAC, EMEA, AMER regions
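The global CIDR plan maps naturally onto hierarchical VPC IPAM pools: one top-level private pool carved into per-geography pools. A sketch with placeholder supernets, showing one geography (EMEA and AMER pools would follow the same shape):

```hcl
# Sketch: hierarchical IPAM — global /8 split into regional pools.
# Supernets and region list are placeholders.
resource "aws_vpc_ipam" "global" {
  operating_regions { region_name = "us-east-1" }
  operating_regions { region_name = "eu-west-1" }
  operating_regions { region_name = "ap-southeast-1" }
}

resource "aws_vpc_ipam_pool" "global" {
  address_family = "ipv4"
  ipam_scope_id  = aws_vpc_ipam.global.private_default_scope_id
}

resource "aws_vpc_ipam_pool_cidr" "global" {
  ipam_pool_id = aws_vpc_ipam_pool.global.id
  cidr         = "10.0.0.0/8"
}

# APAC pool carved out of the global pool, locked to its region
resource "aws_vpc_ipam_pool" "apac" {
  address_family      = "ipv4"
  ipam_scope_id       = aws_vpc_ipam.global.private_default_scope_id
  source_ipam_pool_id = aws_vpc_ipam_pool.global.id
  locale              = "ap-southeast-1"
}

resource "aws_vpc_ipam_pool_cidr" "apac" {
  ipam_pool_id = aws_vpc_ipam_pool.apac.id
  cidr         = "10.0.0.0/10"
}
```

Workload VPCs then request a CIDR via `ipv4_ipam_pool_id` on `aws_vpc`, which is what prevents overlaps at this scale.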

Terraform — Regional Account Factory:

modules/regional-account-factory/main.tf
# Each region has its own Terraform workspace and state
variable "region_config" {
  type = object({
    region_name         = string
    aws_region          = string
    cidr_pool           = string
    tgw_id              = string
    ou_prod_id          = string
    ou_nonprod_id       = string
    security_account_id = string
  })
}

variable "teams" {
  type = list(object({
    name         = string
    environments = list(string)
    budget_prod  = number
    budget_dev   = number
    compliance   = string
  }))
}

# Create accounts for all teams in this region
module "team_accounts" {
  source = "../aft-account-request"

  for_each = { for item in flatten([
    for team in var.teams : [
      for env in team.environments : {
        key        = "${team.name}-${env}"
        name       = "${var.region_config.region_name}-${team.name}-${env}"
        email      = "aws+${var.region_config.region_name}-${team.name}-${env}@bank.com"
        ou         = env == "prod" ? var.region_config.ou_prod_id : var.region_config.ou_nonprod_id
        team       = team.name
        env        = env
        budget     = env == "prod" ? team.budget_prod : team.budget_dev
        compliance = team.compliance
      }
    ]
  ]) : item.key => item }

  account_name  = each.value.name
  account_email = each.value.email
  ou_id         = each.value.ou

  tags = {
    Team        = each.value.team
    Environment = each.value.env
    Region      = var.region_config.region_name
    Compliance  = each.value.compliance
  }
}

# Regional SCP — restrict to this region only
resource "aws_organizations_policy" "region_lock" {
  name = "${var.region_config.region_name}-region-lock"
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "DenyOtherRegions"
        Effect = "Deny"
        NotAction = [
          "iam:*",
          "sts:*",
          "organizations:*",
          "support:*",
          "budgets:*",
          "cloudfront:*",
          "route53:*",
          "wafv2:*",
          "s3:GetBucketLocation",
          "s3:ListAllMyBuckets",
        ]
        Resource = "*"
        Condition = {
          StringNotEquals = {
            "aws:RequestedRegion" = [var.region_config.aws_region]
          }
        }
      }
    ]
  })
}



| Aspect | Pattern 1 (Startup) | Pattern 2 (Mid-size) | Pattern 3 (Regulated) | Pattern 4 (Multi-cloud) | Pattern 5 (Global) |
| --- | --- | --- | --- | --- | --- |
| Accounts | 5-10 | 50-100 | 100-300 | 50-200 (per cloud) | 500+ |
| Teams | 1-3 | 10-30 | 20-50 | 10-30 | 50+ |
| Regions | 1 | 1-2 | 1-2 | 1-2 per cloud | 5+ |
| Networking | VPC peering | Transit Gateway | TGW + route isolation | TGW + VPN cross-cloud | TGW per region + peering |
| Identity | IAM Identity Center | Identity Center + IdP | Identity Center + MFA + break-glass | Okta → both clouds | Identity Center + delegated |
| Logging | Same-account S3 | Dedicated Log Archive | WORM storage + 7yr retention | SIEM aggregating both | Regional + global aggregation |
| Security | GuardDuty basic | GuardDuty + Security Hub | + Macie + Detective + forensics | + OPA cross-cloud policy | + delegated admin per region |
| Compliance | None formal | SOC2 | PCI-DSS, SOX, HIPAA | Unified tagging + OPA | Per-region regulatory |
| Account vending | Manual TF | AFT | AFT + compliance checks | AFT + GCP project factory | Regional AFT instances |
| Setup time | 1-2 days | 1-2 weeks | 4-8 weeks | 4-6 weeks | 8-16 weeks |
| Team size | 1-2 | 3-5 | 5-10 | 5-10 | 15-25 (global + regional) |

Q: “You are hired as a Cloud Architect for a mid-size bank in Dubai with 15 engineering teams. They are on-prem today and want to move to cloud. Design the landing zone.”

Model Answer:

I would recommend Pattern 3 (Regulated) with elements of Pattern 2 (Mid-size) for the initial implementation, designed to scale to Pattern 5 if the bank expands regionally.

Why Pattern 3 and not Pattern 2: Even though the bank is mid-size (15 teams, likely 60-80 accounts), it is a regulated financial institution. The UAE Central Bank (CBUAE) and NESA impose requirements for data residency, audit trails, encryption, and network segmentation. Starting with Pattern 2 and retrofitting compliance later is 3-5x more expensive than building it right from the start.

Recommended architecture:

Recommended architecture — Dubai bank landing zone

Implementation plan:

  1. Week 1-2: Management account, Control Tower, Security OU accounts, SCPs
  2. Week 3-4: Network Hub (TGW, Network Firewall, Direct Connect to on-prem DC in Dubai)
  3. Week 5-6: AFT setup, account factory module, global baseline (VPC + IAM + logging + monitoring)
  4. Week 7-8: PCI zone (isolated TGW route table, stricter SCPs, Macie for data scanning)
  5. Week 9-10: Vend all 60-80 workload accounts, SSO setup, team onboarding
  6. Week 11-12: Testing, compliance audit dry-run, documentation

Data residency: SCP denying all regions except me-south-1 (Bahrain) and optionally me-central-1 (UAE) for DR. This satisfies CBUAE data residency requirements.

Total: 12 weeks to production-ready, 1 dedicated platform team of 4-5 engineers.