While Amazon Web Services (AWS) has steadily increased the availability of different services on its public cloud platform over the years, at its core, its primary service remains server compute capacity.
At the 2018 AWS re:Invent conference, the cloud computing giant made a series of announcements expanding the compute options available to its users.
ARM-Powered AWS EC2 A1 Instances
Among the new AWS services are the EC2 A1 instances, which are powered by a new class of ARM-based Graviton processors. The Graviton is a custom processor designed in-house by AWS, featuring 64-bit ARM Neoverse cores.
"AWS Graviton processors are a new line of processors that are custom designed by AWS utilizing Amazon’s extensive expertise in building platform solutions for cloud applications running at scale," AWS stated in a media advisory. "These processors deliver targeted power, performance, and cost optimizations."
The A1 instances have a small starting footprint: a single virtual CPU (vCPU) and 2 GB of memory. The family currently scales up to the a1.4xlarge instance, which delivers 16 vCPUs and 32 GB of RAM.
AWS has built the A1 on top of its Nitro System, which provides dedicated hardware and a lightweight hypervisor to maximize server system performance.
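The A1 family scales by doubling vCPUs and memory at each step. The sketch below lays out that ladder; the two endpoint sizes (1 vCPU / 2 GB and 16 vCPUs / 32 GB) come from AWS's announcement, while the intermediate size names and figures are assumptions based on AWS's usual naming pattern and should be checked against current AWS documentation:

```python
# A1 size ladder as described above. Endpoint sizes are from the AWS
# announcement; intermediate entries assume AWS's standard doubling
# pattern and are illustrative, not authoritative.
A1_SIZES = {
    "a1.medium":  {"vcpus": 1,  "memory_gb": 2},
    "a1.large":   {"vcpus": 2,  "memory_gb": 4},   # assumed
    "a1.xlarge":  {"vcpus": 4,  "memory_gb": 8},   # assumed
    "a1.2xlarge": {"vcpus": 8,  "memory_gb": 16},  # assumed
    "a1.4xlarge": {"vcpus": 16, "memory_gb": 32},
}

def memory_per_vcpu(size: str) -> float:
    """Return GB of memory per vCPU for a given A1 size."""
    spec = A1_SIZES[size]
    return spec["memory_gb"] / spec["vcpus"]

# Every size in the ladder keeps the same 2 GB-per-vCPU ratio.
for size in A1_SIZES:
    assert memory_per_vcpu(size) == 2.0
```

The constant 2 GB-per-vCPU ratio is what makes the A1 a predictable fit for scale-out workloads that grow by adding instances rather than resizing them.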
New AWS C5n Instance Type
AWS is also expanding the networking capacity of its C5 class of instances with the new C5n instance type. Like the A1 instances, the C5n is powered by the AWS Nitro System. Unlike the ARM-powered A1, the C5n is x86-based, running on 3.0 GHz Intel Skylake processors with support for the Intel Advanced Vector Extensions 512 (AVX-512) instruction set.
The headline feature of the C5n is its network capacity: up to 100 Gbps of bandwidth on the c5n.18xlarge instance, which also provides 72 vCPUs and 192 GB of memory.
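To put that bandwidth figure in perspective, here is a back-of-the-envelope calculation (an illustration only, ignoring protocol overhead) of how long a full copy of the c5n.18xlarge's 192 GB of memory would take to move over the network at line rate:

```python
# Illustrative arithmetic using the c5n.18xlarge figures quoted above.
bandwidth_gbps = 100          # network bandwidth, gigabits per second
memory_gb = 192               # instance memory, gigabytes

# 8 bits per byte: 100 Gbps is 12.5 gigabytes per second.
bandwidth_gbytes_per_s = bandwidth_gbps / 8

# Time to stream the entire memory contents at line rate.
transfer_seconds = memory_gb / bandwidth_gbytes_per_s

print(f"{bandwidth_gbytes_per_s} GB/s -> {transfer_seconds:.2f} s")
# 12.5 GB/s -> 15.36 s
```

In other words, at full line rate the instance could in principle stream its entire memory contents across the network in about 15 seconds, which is why AWS pitches the C5n at network-bound workloads such as HPC and distributed analytics.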
AWS Machine Learning Instances
AWS also announced new P3dn machine learning-optimized instances that benefit from a host of customized AWS silicon and software.
At the high end, the new p3dn.24xlarge instances provide users with eight NVIDIA V100 GPUs, each with 32 GB of GPU memory, local NVMe storage, 96 vCPUs on Intel Xeon Scalable processors, and 100 Gbps of networking capacity.
Coming in 2019, AWS also announced its new AWS Inferentia silicon, a high-performance machine learning inference chip custom designed by AWS. According to AWS, Inferentia will provide hundreds of teraflops per chip and thousands of teraflops per Amazon EC2 instance.
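The p3dn.24xlarge figures above can be combined into a quick per-instance tally (an illustration derived only from the numbers quoted in this article):

```python
# Aggregate accelerator capacity of a single p3dn.24xlarge,
# from the figures quoted above.
gpus = 8
gpu_memory_gb = 32            # per GPU

total_gpu_memory_gb = gpus * gpu_memory_gb
print(total_gpu_memory_gb)    # 256 GB of GPU memory per instance

# Share of the 100 Gbps network pipe if split evenly across GPUs,
# relevant for multi-node distributed training traffic.
network_gbps = 100
per_gpu_gbps = network_gbps / gpus
print(per_gpu_gbps)           # 12.5 Gbps per GPU
```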
Sean Michael Kerner is a senior editor at ServerWatch and InternetNews.com. Follow him on Twitter @TechJournalist.