AWS Launches EC2 P6-B300 Instances with NVIDIA Blackwell Ultra GPUs
Executive Summary
Amazon Web Services (AWS) has announced the general availability of its next-generation, GPU-accelerated Amazon EC2 P6-B300 instances. Powered by NVIDIA Blackwell Ultra GPUs, these instances are designed for training and serving large-scale AI models, including those with trillions of parameters or built on complex techniques such as Mixture of Experts (MoE). Compared to the previous generation, P6-B300 instances offer twice the networking bandwidth and 1.5x the GPU memory, providing a balanced platform for the most demanding AI workloads.
Key Takeaways
* Product: Amazon EC2 P6-B300 instances.
* Core Hardware: Each instance is equipped with 8x NVIDIA B300 "Blackwell Ultra" GPUs.
* Performance Uplift: Delivers 2x the networking bandwidth (6.4 Tbps EFA) and 1.5x the GPU memory (2,144 GB HBM3e) of the previous generation.
* Target Workloads: Ideal for distributed training of trillion-parameter models, Mixture of Experts (MoE) architectures, and multimodal processing.
* Key Specifications: 192 vCPUs, 4 TB of system memory, 6.4 Tbps EFA networking, and 300 Gbps ENA networking per instance.
* Availability: Now available in the US West (Oregon) AWS Region through Amazon EC2 Capacity Blocks for ML and Savings Plans (see the reservation sketch after this list). For on-demand reservations, customers are directed to contact their AWS account manager.
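For teams planning capacity, the sketch below shows how a Capacity Block offering for these instances might be queried and reserved with boto3. It is a minimal illustration, not an official recipe: the instance type string `p6-b300.48xlarge` and the one-day, single-instance window are assumptions, and the announcement itself does not specify API names for P6-B300.

```python
# Minimal sketch: find and (optionally) purchase an EC2 Capacity Block for ML.
# Assumptions: instance type "p6-b300.48xlarge" (not confirmed by the announcement)
# and the US West (Oregon) region noted in the availability section.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # US West (Oregon)

start = datetime.now(timezone.utc) + timedelta(days=1)
end = start + timedelta(days=14)

# Search for offerings of a single instance for a 24-hour block
# starting sometime in the next two weeks.
response = ec2.describe_capacity_block_offerings(
    InstanceType="p6-b300.48xlarge",  # assumed API name for P6-B300
    InstanceCount=1,
    StartDateRange=start,
    EndDateRange=end,
    CapacityDurationHours=24,
)

for offering in response["CapacityBlockOfferings"]:
    print(
        offering["CapacityBlockOfferingId"],
        offering["StartDate"],
        offering["UpfrontFee"],
    )

# Once an offering is selected, purchasing it reserves the capacity:
# ec2.purchase_capacity_block(
#     CapacityBlockOfferingId="<offering-id>",
#     InstancePlatform="Linux/UNIX",
# )
```

Capacity Blocks are purchased for fixed windows, so the query step above is how availability and upfront pricing would typically be compared before committing to a reservation.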
Strategic Importance
This launch solidifies AWS's position as a top-tier provider of high-performance AI infrastructure, offering NVIDIA's latest GPU architecture to compete directly for customers with the most demanding AI training needs.