Creating Kubernetes Clusters on Azure
Use Terraform’s azurerm_kubernetes_cluster resource or the official AKS module (Azure/aks/azurerm) to provision production-ready clusters. The module handles node pools, networking, and diagnostics so you do not have to handcraft every dependency.
Minimum Viable Configuration (Excerpt)
resource "azurerm_resource_group" "aks" {
name = var.resource_group_name
location = var.location
}
module "aks" {
source = "Azure/aks/azurerm"
version = "~> 6.0"
resource_group_name = azurerm_resource_group.aks.name
kubernetes_version = var.kubernetes_version
prefix = var.cluster_name
default_node_pool_name = "system"
default_node_pool_vm_size = "Standard_D4s_v3"
default_node_pool_node_count = 3
vnet_subnet_id = azurerm_subnet.aks.id
}
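If you prefer the raw azurerm_kubernetes_cluster resource over the module, a minimal sketch might look like the following. It assumes the resource group and subnet from the excerpt above; the resource label "this" and the dns_prefix value are illustrative choices, not fixed names.

# Minimal sketch using the raw resource instead of the module
resource "azurerm_kubernetes_cluster" "this" {
  name                = var.cluster_name
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  dns_prefix          = var.cluster_name
  kubernetes_version  = var.kubernetes_version

  # Same system node pool as the module example above
  default_node_pool {
    name           = "system"
    vm_size        = "Standard_D4s_v3"
    node_count     = 3
    vnet_subnet_id = azurerm_subnet.aks.id
  }

  # System-assigned managed identity for the control plane (see the checklist below)
  identity {
    type = "SystemAssigned"
  }
}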
Operational Checklist
- Identity: Prefer managed identities over service principals for AKS control plane access.
- Networking: Allocate subnet CIDRs that leave headroom for additional node pools; enable Azure CNI if you need routable pod IPs.
- Security: Enforce Azure RBAC, enable Azure AD integration, and configure Azure Policy for Kubernetes (baseline restrictions, pod security).
- Observability: Enable diagnostic settings to ship control-plane logs and metrics to Log Analytics or Azure Monitor. The sketch after this list shows how several of these checklist items map onto Terraform arguments.
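To make the checklist concrete, here is a hedged sketch of how the networking, security, and observability items can map onto Terraform arguments. The blocks below would sit inside the azurerm_kubernetes_cluster resource sketched earlier (the managed identity was already shown there), and azurerm_log_analytics_workspace.aks is a hypothetical workspace assumed to be defined elsewhere.

  # Networking: Azure CNI assigns pods routable IPs from the cluster subnet
  network_profile {
    network_plugin = "azure"
  }

  # Security: managed Azure AD integration with Azure RBAC, plus the Azure Policy add-on
  azure_active_directory_role_based_access_control {
    azure_rbac_enabled = true
    # On 3.x versions of the azurerm provider this block also needs managed = true
  }
  azure_policy_enabled = true

  # Observability: container insights agent wired to Log Analytics
  oms_agent {
    log_analytics_workspace_id = azurerm_log_analytics_workspace.aks.id
  }

Control-plane logs and metrics are shipped separately through a diagnostic setting on the cluster resource:

# Ship API server logs and platform metrics; add further log categories as needed
resource "azurerm_monitor_diagnostic_setting" "aks" {
  name                       = "aks-control-plane"
  target_resource_id         = azurerm_kubernetes_cluster.this.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.aks.id

  enabled_log {
    category = "kube-apiserver"
  }

  metric {
    category = "AllMetrics"
  }
}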
Next Steps
- Automate terraform plan/apply via Azure DevOps or GitHub Actions, with remote state stored in Azure Storage and secrets managed in Azure Key Vault (see the backend sketch after this list).
- Rotate node pool images regularly (e.g., via node image upgrades) to pick up security patches.
- Review Azure’s AKS baseline reference architecture for production hardening guidance.
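For the remote state piece of that pipeline, a minimal backend configuration might look like the sketch below. The resource group, storage account, and container names are placeholders, and the credentials (service principal or OIDC, Key Vault references) would be injected by the pipeline rather than committed.

# Hedged sketch: remote state in Azure Storage (all names are placeholders)
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "sttfstateexample"   # must be globally unique
    container_name       = "tfstate"
    key                  = "aks.terraform.tfstate"
  }
}

In a pipeline, these values can also be supplied per environment with terraform init -backend-config=... instead of being hard-coded.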