ingress-nginx Is Dead, Crossplane Graduates, Nutanix Goes Bare Metal
ingress-nginx Archived: Gateway API v1.4 Is the Mandatory Migration Path
- The ingress-nginx project was officially archived on March 24, 2026, following CVE-2025-1974 (CVSS 9.8 unauthenticated RCE, "IngressNightmare") plus four additional HIGH-severity CVEs covering config injection, path injection, and annotation abuse. The archival also completes the transition Broadcom previewed for VKS 3.6, where AVI Load Balancer replaced NGINX Ingress (covered last issue).
- Gateway API v1.4 is the designated successor. Its three-layer model — GatewayClass (infra admins), Gateway (platform teams), HTTPRoute (app devs) — replaces unstructured ingress annotations with typed CRD fields and enables cross-namespace routing via ReferenceGrant. ingress2gateway v1.0 automates ~85% of migration (install via brew install ingress2gateway or Go). Supported target implementations: Envoy Gateway, Cilium Gateway, kgateway, NGINX Gateway Fabric, Istio Waypoint. Custom Lua configs, rate limiting, WAF, and session affinity require manual reimplementation.
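The three-layer split is easiest to see as minimal manifests. The sketch below builds them as plain dicts; all concrete names (the gateway class, namespaces, hostname, service) are hypothetical, chosen only to show which team owns which object.

```python
import json

# Layer 1: GatewayClass — owned by infrastructure admins.
# (All names here are illustrative, not from any real cluster.)
gateway_class = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "GatewayClass",
    "metadata": {"name": "example-gc"},
    "spec": {"controllerName": "example.com/gateway-controller"},
}

# Layer 2: Gateway — owned by platform teams, in a shared infra namespace.
# allowedRoutes controls which namespaces may attach routes to it.
gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "Gateway",
    "metadata": {"name": "shared-gw", "namespace": "infra"},
    "spec": {
        "gatewayClassName": "example-gc",
        "listeners": [{
            "name": "http", "port": 80, "protocol": "HTTP",
            "allowedRoutes": {"namespaces": {"from": "All"}},
        }],
    },
}

# Layer 3: HTTPRoute — owned by app developers in their own namespace.
# Typed fields (hostnames, rules, backendRefs) replace ingress annotations.
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "app-route", "namespace": "team-a"},
    "spec": {
        "parentRefs": [{"name": "shared-gw", "namespace": "infra"}],
        "hostnames": ["app.example.com"],
        "rules": [{"backendRefs": [{"name": "app-svc", "port": 8080,
                                    "namespace": "team-b"}]}],
    },
}

# ReferenceGrant lives in the *target* namespace and explicitly permits
# the cross-namespace backendRef above (team-a routes -> team-b Services).
reference_grant = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "ReferenceGrant",
    "metadata": {"name": "allow-team-a-routes", "namespace": "team-b"},
    "spec": {
        "from": [{"group": "gateway.networking.k8s.io",
                  "kind": "HTTPRoute", "namespace": "team-a"}],
        "to": [{"group": "", "kind": "Service"}],
    },
}

print(json.dumps(http_route, indent=2))
```

The ownership boundary is the point: app teams never touch the Gateway or GatewayClass, and cross-namespace references fail closed unless a ReferenceGrant in the target namespace opts in.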
Crossplane Graduates at CNCF
- Crossplane has reached CNCF Graduated status, announced at KubeCon EU Amsterdam. Bassam Tabbara (Upbound) described the project as managing "hundreds of millions" of infrastructure resources on behalf of Fortune 10, 50, and 100 companies via autonomous declarative control loops.
- The graduation case rests on ecosystem breadth: thousands of maintainers across distinct providers, functions, and integrations spanning AWS, GCP, and Azure — a critical mass that was slow to achieve but is now established.
- Active investment areas post-graduation: developer experience for building control planes, provider ecosystem coverage velocity, and mixing deterministic and probabilistic (LLM-based) controllers within the same control plane for agentic AI workloads.
Nutanix .NEXT 2026: Bare-Metal Kubernetes and Multi-Cloud Expansions
- NKP Metal enters early access for NKP Pro/Ultimate license holders; GA is targeted for H2 2026. Workloads run directly on bare metal without a hypervisor layer — designed for edge environments and dense GPU AI training. Storage via CSI or Cloud Native AOS.
- NC2 on Google Cloud gains Hyperdisk and C3 bare-metal instance support, decoupling storage and compute scaling. NC2 on AWS GovCloud is now GA; AWS European Sovereign Cloud support is slated for later in 2026.
- NCM 2.0 is GA with a redesigned architecture for managing large cluster counts across multiple Prism Central instances, plus on-premises Cost Governance. NCP Zero-Copy Migrations enable near-instantaneous, in-place VMware vSphere Virtual Volumes → AHV vDisk conversion without data movement.
KubeCon EU Final Day: GPU Sharing, Agones, and Closed-Loop Automation
- Agones was confirmed as a CNCF Sandbox project; Ubisoft demoed it running Rainbow Six Mobile with a "build once, deploy everywhere" strategy that preserves in-memory session state during node migrations without disrupting active game sessions.
- The HAMi project demoed GPU slicing, MIG partitions, and custom memory isolation for shared hardware — enabling multiple workloads to share GPU resources without starvation, relevant to the multi-tenant inference infrastructure patterns emerging on Kubernetes (GPU management was the dominant KubeCon EU theme per last issue).
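The starvation-avoidance property comes down to admission accounting: a workload lands on a device only if both its memory and compute-share budgets still fit. The sketch below illustrates that idea in the abstract — it is not HAMi's implementation or API, and the device sizes and budgets are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class GpuDevice:
    """Conceptual model of a shared GPU with isolation budgets."""
    mem_mib: int                    # total device memory
    cores_pct: int = 100            # total compute share, as a percentage
    allocs: list = field(default_factory=list)

    @property
    def free_mem(self) -> int:
        return self.mem_mib - sum(mem for mem, _ in self.allocs)

    @property
    def free_cores(self) -> int:
        return self.cores_pct - sum(cores for _, cores in self.allocs)

    def admit(self, mem_mib: int, cores_pct: int) -> bool:
        """Admit a workload only if both budgets hold; otherwise reject,
        so existing tenants can never be starved by over-commitment."""
        if mem_mib <= self.free_mem and cores_pct <= self.free_cores:
            self.allocs.append((mem_mib, cores_pct))
            return True
        return False

dev = GpuDevice(mem_mib=40960)          # hypothetical 40 GiB device
assert dev.admit(20480, 50)             # tenant A: half the memory and cores
assert dev.admit(10240, 25)             # tenant B still fits
assert not dev.admit(20480, 25)         # tenant C would over-commit memory: rejected
```

Real fractional-GPU schedulers layer hard isolation (MIG partitions, memory limits) under this accounting so a rejected request fails at scheduling time rather than at runtime.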
- Nokia demoed autonomous infrastructure managing 70,000+ core private clouds using CNCF projects with KPT for closed-loop automation. Perses, an EU-funded visualization tool, was presented as a reusable React component library built on an open specification for sharing dashboards across observability stacks without rebuilding per-team.
- HAProxy Universal Mesh was introduced as an architecture targeting heterogeneous environments — legacy VMs alongside multi-cloud Kubernetes — for compliance-preserving high availability across mixed infrastructure.
Kubernetes Rightsizing: HPA-Aware Optimization Enters GitOps Pipelines
- Akamas announced HPA-aware optimization at KubeCon EMEA 2026 that surfaces rightsize recommendations as pull requests into GitOps pipelines — treating cluster autoscaler configuration and workload sizing as a single coupled problem rather than separate concerns.
- The underlying architectural principle: autoscaling misconfigured workloads amplifies resource waste. Sizing must precede scaling, otherwise the HPA reacts to artificial load from undersized pods rather than genuine demand signals.