2025 proved to be a pivotal year for DigitalOcean’s Managed Kubernetes platform. The year brought a series of substantial updates aimed at improving the platform’s simplicity, security, and scalability, giving developers and businesses more robust capabilities while reducing operational burden. Whether users are managing production workloads, experimenting with microservices, or scaling customer-facing applications, the innovations introduced throughout the year have made deploying, managing, and optimizing Kubernetes on DigitalOcean more accessible than ever. This overview walks through the significant updates that shaped the platform over the past year.
Major March Releases
In March, DigitalOcean introduced four significant upgrades to its Kubernetes service, DigitalOcean Kubernetes (DOKS). These enhancements were designed to help clusters handle larger workloads more efficiently:
Increased Cluster Capacity
The capacity of clusters was expanded from 500 to 1,000 worker nodes. This development allows larger applications to operate within a single cluster, eliminating the complexity of managing multiple environments.
VPC-native Networking
With the introduction of VPC-native Kubernetes, IP addresses are now assigned directly from users’ Virtual Private Clouds (VPCs). This improvement enhances performance and simplifies communication with other cloud resources, offering a more streamlined networking experience.
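As an illustration, here is a minimal sketch of creating a DOKS cluster inside an existing VPC through the DigitalOcean API using Python's requests library. The VPC UUID, region, Kubernetes version string, and node size are placeholders, and the exact fields that control VPC-native addressing should be confirmed against the current API reference.

```python
import os
import requests

# Sketch: create a DOKS cluster attached to an existing VPC so that cluster
# addressing comes from the VPC. Values below are placeholders.
API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]

payload = {
    "name": "vpc-native-demo",
    "region": "nyc1",
    "version": "1.31.1-do.0",          # placeholder; use a currently supported version
    "vpc_uuid": "YOUR-VPC-UUID-HERE",  # the VPC the cluster should join
    "node_pools": [
        {"size": "s-2vcpu-4gb", "count": 3, "name": "default-pool"}
    ],
}

resp = requests.post(
    "https://api.digitalocean.com/v2/kubernetes/clusters",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["kubernetes_cluster"]["id"])
```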
eBPF-powered Networking
The traditional kube-proxy has been replaced with eBPF-based networking and routing. This change results in faster packet processing and reduced latency, which is particularly beneficial for high-traffic and real-time workloads.
Managed Cilium with Hubble
The integration of Managed Cilium with Hubble enhances observability, security, and modern networking. It simplifies troubleshooting, providing developers with clearer visibility and a more straightforward networking stack.
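Because Cilium is managed for you, Cilium-specific resources can be used directly in the cluster. The sketch below applies a simple CiliumNetworkPolicy with the Kubernetes Python client; the app labels ("frontend", "api") and the default namespace are assumptions for illustration only.

```python
from kubernetes import client, config

# Sketch: allow ingress to "api" pods only from "frontend" pods using a
# CiliumNetworkPolicy. Labels and namespace are illustrative placeholders.
config.load_kube_config()
custom = client.CustomObjectsApi()

policy = {
    "apiVersion": "cilium.io/v2",
    "kind": "CiliumNetworkPolicy",
    "metadata": {"name": "allow-frontend-to-api", "namespace": "default"},
    "spec": {
        "endpointSelector": {"matchLabels": {"app": "api"}},
        "ingress": [
            {"fromEndpoints": [{"matchLabels": {"app": "frontend"}}]}
        ],
    },
}

custom.create_namespaced_custom_object(
    group="cilium.io",
    version="v2",
    namespace="default",
    plural="ciliumnetworkpolicies",
    body=policy,
)
```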
Collectively, these features significantly boost the scalability, performance, and reliability of DOKS. Developers gain enhanced visibility, while businesses enjoy reduced overhead and improved scalability for applications.
Innovations in July
July saw the introduction of several powerful features aimed at improving the efficiency of applications deployed on DigitalOcean Kubernetes, particularly those involving AI and machine learning.
Introduction of GPU Droplet Types
DigitalOcean introduced four new types of GPU droplets to support various use cases:
- The NVIDIA RTX 4000 Ada Generation GPU is ideal for content creation, 3D modeling, rendering, and video workflows, offering exceptional performance and efficiency.
- The NVIDIA RTX 6000 Ada Generation GPU is perfect for rendering, virtual workstations, and AI-related tasks.
- The NVIDIA L40S GPU is well suited to graphics, rendering, and video streaming workloads.
- The AMD MI300X GPU supports advanced AI inference and High-Performance Computing (HPC) workloads, combining powerful compute cores with high memory bandwidth.
Nodepool Scale-to-zero Feature
The Nodepool Scale-to-zero capability allows node pools to automatically scale down to zero nodes when no active workloads are present. This innovation eliminates compute charges during inactive periods, making it particularly beneficial for development/testing environments and applications with sporadic usage.
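As a rough sketch of how this might be configured through the DigitalOcean API, the request below adds an autoscaling node pool whose minimum size is zero. The cluster ID, pool name, and machine slug are placeholders; confirm the exact autoscaling fields in the API documentation.

```python
import os
import requests

# Sketch: add an autoscaling node pool that can shrink to zero nodes when
# idle, so no compute charges accrue during inactive periods.
API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
CLUSTER_ID = "YOUR-CLUSTER-ID"

pool = {
    "name": "burst-pool",
    "size": "s-4vcpu-8gb",  # placeholder machine slug
    "count": 0,             # start with no nodes
    "auto_scale": True,
    "min_nodes": 0,         # scale-to-zero lower bound
    "max_nodes": 5,
}

resp = requests.post(
    f"https://api.digitalocean.com/v2/kubernetes/clusters/{CLUSTER_ID}/node_pools",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=pool,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["node_pool"]["id"])
```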
Launch of the Atlanta Datacenter (ATL1)
A new AI-optimized datacenter, ATL1, was launched in Atlanta. This facility is DigitalOcean's largest and most advanced datacenter to date, designed for high-density GPU infrastructure to support demanding AI and machine learning workloads, and it brings lower latency and faster response times to applications that are sensitive to delays.
DOKS Routing Agent
The DOKS Routing Agent is a fully-managed solution that simplifies static route configuration within Kubernetes clusters. It supports Kubernetes custom resources, facilitating the definition of custom routes, the use of Equal-Cost Multi-Path (ECMP) routing across multiple gateways, and the ability to override default routes without interrupting connectivity.
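The snippet below sketches the general pattern of declaring a static route as a Kubernetes custom resource using the Python client. The API group (example.doks.io), kind (StaticRoute), and field names are hypothetical stand-ins, not the routing agent's actual schema; consult the DOKS Routing Agent documentation for the real resource definitions.

```python
from kubernetes import client, config

# Sketch: send traffic for a destination CIDR through in-VPC gateway hops,
# illustrating the custom-resource pattern. Group, kind, and spec fields
# below are hypothetical placeholders.
config.load_kube_config()
custom = client.CustomObjectsApi()

route = {
    "apiVersion": "example.doks.io/v1alpha1",  # placeholder group/version
    "kind": "StaticRoute",                     # placeholder kind
    "metadata": {"name": "egress-via-appliance"},
    "spec": {
        "destinations": ["192.0.2.0/24"],          # traffic to this CIDR...
        "gateways": ["10.110.0.5", "10.110.0.6"],  # ...via these hops (ECMP)
    },
}

custom.create_cluster_custom_object(
    group="example.doks.io",
    version="v1alpha1",
    plural="staticroutes",
    body=route,
)
```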
Advanced Capabilities and Features
DigitalOcean also introduced the DigitalOcean MCP (Model Context Protocol) Server, enabling users to manage cloud resources through natural-language commands via AI-powered tools. This innovation simplifies tasks like provisioning Managed Databases, making cloud operations faster and more intuitive. The integration of AI into containerized applications marks a significant step forward, allowing for natural-language automation and reducing operational overhead.
Kubernetes Gateway API
As a managed service, the Kubernetes Gateway API is pre-installed in all DOKS clusters, available at no additional cost. This next-generation traffic management solution is more expressive, extensible, and powerful than traditional Ingress. It leverages Cilium’s high-performance eBPF implementation, offering superior performance and advanced routing capabilities without the traditional proxy overhead.
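As an illustration, the sketch below creates a Gateway and an HTTPRoute with the Kubernetes Python client. The gatewayClassName ("cilium"), hostname, and backend Service name ("web") are assumptions about the target cluster and should be adjusted to match your setup.

```python
from kubernetes import client, config

# Sketch: route HTTP traffic for app.example.com to an assumed Service named
# "web" through the pre-installed Gateway API resources.
config.load_kube_config()
custom = client.CustomObjectsApi()

gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "Gateway",
    "metadata": {"name": "public-gw", "namespace": "default"},
    "spec": {
        "gatewayClassName": "cilium",  # assumed class exposed by managed Cilium
        "listeners": [{"name": "http", "protocol": "HTTP", "port": 80}],
    },
}

route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "web-route", "namespace": "default"},
    "spec": {
        "parentRefs": [{"name": "public-gw"}],
        "hostnames": ["app.example.com"],
        "rules": [{"backendRefs": [{"name": "web", "port": 80}]}],
    },
}

for obj, plural in ((gateway, "gateways"), (route, "httproutes")):
    custom.create_namespaced_custom_object(
        group="gateway.networking.k8s.io",
        version="v1",
        namespace="default",
        plural=plural,
        body=obj,
    )
```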
Priority Expander for Cluster Autoscaler
The Priority Expander feature for the DOKS Cluster Autoscaler allows workloads to scale automatically across multiple node pools in a priority order. This automation eliminates the need for manual intervention to add capacity, simplifying workload management.
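The upstream cluster autoscaler reads its priority configuration from a ConfigMap named cluster-autoscaler-priority-expander in kube-system, where higher numbers are tried first. The sketch below assumes DOKS follows that upstream convention; the node-pool name patterns are placeholders.

```python
from kubernetes import client, config

# Sketch: prefer scaling the "cheap-pool" node pools and fall back to
# "fallback-pool" pools only when needed. Pool name patterns are placeholders.
config.load_kube_config()
core = client.CoreV1Api()

priorities = """\
100:
  - cheap-pool.*
10:
  - fallback-pool.*
"""

cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(
        name="cluster-autoscaler-priority-expander",
        namespace="kube-system",
    ),
    data={"priorities": priorities},
)
core.create_namespaced_config_map(namespace="kube-system", body=cm)
```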
VPC NAT Gateway
The VPC NAT Gateway enables Kubernetes workloads in private subnets to securely access the internet for outbound operations without exposing them to inbound traffic. By routing traffic through a managed NAT gateway with its own public IP, DOKS simplifies network architecture and strengthens security.
Network File Storage (NFS)
Network File Storage provides a scalable, high-availability shared file system that can be mounted across multiple pods and nodes in Kubernetes clusters. This feature simplifies stateful application deployment on DOKS, allowing for data persistence and sharing across workloads.
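A shared volume of this kind is typically requested with a ReadWriteMany PersistentVolumeClaim, as in the sketch below. The storage class name ("nfs") and requested size are assumptions; list the storage classes in your cluster and use the NFS-backed one.

```python
from kubernetes import client, config

# Sketch: claim a shared volume that multiple pods and nodes can mount at the
# same time. Storage class name and size are illustrative placeholders.
config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-assets"},
    "spec": {
        "accessModes": ["ReadWriteMany"],       # shared across pods and nodes
        "storageClassName": "nfs",              # placeholder class name
        "resources": {"requests": {"storage": "50Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```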
Multi-node GPU Configurations
DOKS supports multi-node GPU configurations, allowing users to deploy scalable, GPU-powered workloads across multiple nodes seamlessly. This capability facilitates running high-performance applications such as machine learning training, data processing, or GPU-intensive containerized workloads.
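As a minimal sketch, the Deployment below requests one GPU per replica; with more replicas than GPUs available on any single node, the scheduler spreads the pods across the GPU node pool. The container image, replica count, and the nvidia.com/gpu resource name (which assumes NVIDIA nodes) are placeholders.

```python
from kubernetes import client, config

# Sketch: a Deployment whose replicas each consume one GPU, spreading the
# workload across multiple GPU nodes. Image and replica count are placeholders.
config.load_kube_config()
apps = client.AppsV1Api()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "trainer"},
    "spec": {
        "replicas": 4,
        "selector": {"matchLabels": {"app": "trainer"}},
        "template": {
            "metadata": {"labels": {"app": "trainer"}},
            "spec": {
                "containers": [{
                    "name": "trainer",
                    "image": "registry.example.com/trainer:latest",  # placeholder image
                    "resources": {"limits": {"nvidia.com/gpu": "1"}},  # one GPU per pod
                }],
            },
        },
    },
}
apps.create_namespaced_deployment(namespace="default", body=deployment)
```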
Looking Ahead
DigitalOcean’s efforts throughout 2025 have made its managed Kubernetes offering more straightforward, scalable, and performant. However, this journey is far from over. There are exciting developments on the horizon for 2026, promising even more enhancements and innovations to support developers and businesses in their cloud computing endeavors. Stay connected with us to learn more about the advancements coming your way.
For more information, refer to this article.