Improving data center power efficiency isn’t a one-and-done quick fix. It requires a long-term commitment to a broad range of approaches. Some, such as consolidation and virtualization, make a significant difference. Others, like letting the data center run a degree or two warmer, deliver smaller but still valuable gains. Often it’s the combination of several of these small changes that produces a lasting reduction in your carbon footprint.
NetApp’s approach to data center power efficiency reflects this broad range of techniques. Some of them come from our 30-year heritage of designing storage arrays, dating back to when power-hungry hardware components like DRAM were far less efficient than they are now, and far more expensive. From those deep roots and frugal beginnings, delivering more value with less hardware remains part of our engineering DNA.
This commitment may explain why NetApp was, for many years, the only vendor that could use RAID 6 for the highest-performance workloads. It’s also why NetApp led the way with flash acceleration and other storage efficiency techniques like data deduplication for primary storage: first on disk, then on hybrid systems, and finally in all-flash and cloud environments.
It’s great to highlight the history of features that put NetApp at the forefront of power efficiency today. But it’s even better to show how we continue to improve and that each improvement, no matter how slight, is worth celebrating. So, it is my privilege to announce that NetApp has just begun shipping Titanium-rated power supplies for new controllers and shelves to selected customers and geographies.
This change in power supplies delivers a modest but valuable reduction in the overall power consumption of a NetApp® array. The savings depend on the input voltage and on how much power the array is drawing at the time. For data centers that run at 220 volts and above, the savings can be an additional 4% on top of the optimizations that I will detail later in this blog. An extra 4% might not seem like much, but when you add up those improvements across an economy the size of Western Europe, the savings are considerable.
For example, the impact assessment behind the European Union’s power efficiency standards for servers and storage estimated that in 2015, servers and storage consumed approximately 2% of all power in Europe, or about 60 terawatt-hours. Improving that usage by just 4% would have the same effect as building a gigawatt of wind or solar capacity. That potential explains why these standards apply across the EU beginning in 2024, and why NetApp is again leading the pack of storage vendors by making Titanium-rated power supplies available now.
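To put that comparison in perspective, here’s a rough back-of-the-envelope check. The 60 terawatt-hour figure comes from the EU estimate above; the roughly 27% capacity factor for a wind and solar mix is my own illustrative assumption, not an EU or NetApp figure.

```python
# Back-of-the-envelope check: is a 4% saving on EU server and storage power
# really comparable to building about a gigawatt of wind or solar capacity?
# Assumption: an average wind/solar capacity factor of ~27% (illustrative only).

eu_server_storage_twh = 60    # EU servers + storage consumption, 2015 estimate
savings_fraction = 0.04       # additional savings from Titanium-rated power supplies
capacity_factor = 0.27        # assumed blended wind/solar capacity factor
hours_per_year = 8760

energy_saved_twh = eu_server_storage_twh * savings_fraction
equivalent_gw = (energy_saved_twh * 1000) / (hours_per_year * capacity_factor)

print(f"Energy saved:       {energy_saved_twh:.1f} TWh per year")   # 2.4 TWh
print(f"Renewable capacity: {equivalent_gw:.2f} GW equivalent")     # roughly 1 GW
```

Under those assumptions, the 2.4 terawatt-hours saved each year works out to about a gigawatt of renewable generating capacity, which is where the comparison comes from.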
Titanium-rated power supplies are shipping worldwide for NetApp’s high-end AFF A900 and FAS9500 controllers. In EU countries, which must meet compliance requirements beginning in 2024, the current NetApp AFF and FAS controller lineup ships with Titanium-rated power supplies, with no further action required. Outside the EU, customers who need to upgrade from Platinum-rated power supplies can order Titanium-rated ones, and NetApp will work with you to identify the high-utilization systems where these power supply improvements will have the most impact.
These new power supplies are just one part of a much larger efficiency story. NetApp has a long history of innovation, developing efficiency features and making them work together so that the whole is greater than the sum of its parts. Traditional approaches, used by companies like Dell and copied by Pure, depend on building separate silos for block and file data or on creating dedicated backup targets. In contrast, NetApp innovations work together to reduce power consumption throughout the entire infrastructure stack.
That bold claim deserves proof, because it’s easy to be lured into a sense of false equivalence when some vendors make so many unfounded marketing claims. So I’ve put together a list of our software and hardware advancements and their sustainability benefits. I’ve kept the list focused on our all-flash products, but it’s still fairly long, so feel free to skip to the bottom line: NetApp arrays consume between 30% and 43% less energy than our competitors’ offerings do.
If you’re interested in the details, you’ll notice that some of these innovations, like the ones that support consolidation and incremental scaling, make a significant difference on their own. Others, like the new Titanium-rated power supplies, are more incremental but have a big impact when taken together. To make the list easier to read, I’ve split it into two parts: first, software design developments, especially in the NetApp ONTAP® ecosystem, and second, advancements in hardware design.
There’s even more that I would like to talk about, including monitoring; reporting; and a range of services that help you implement and manage your environmental, social, and governance (ESG) commitments. But this list is already long enough, so those benefits have to wait for another blog post.
First, let’s look at NetApp software design developments and their many benefits.
| Feature | Benefit |
| --- | --- |
| Unified block, file, and object | The foundation of storage hardware consolidation, this feature eliminates the inefficiencies of needing multiple hardware silos for structured and unstructured data. It increases resource sharing and decreases waste and complexity. |
| Scale-out | This feature defers power utilization for what would otherwise be unused storage performance and capacity until it’s needed. |
| FabricPool cloud tiering | Inactive data is natively and transparently stored outside the data center instead of being kept on expensive primary flash or disk inside the data center. |
| NetApp Cloud Volumes | Entire workloads can easily be moved to and from AWS, Azure, and Google Cloud for disaster recovery (DR), cloud bursting, and analytics. This eliminates the need for bursting or idle DR capacity within the data center. |
| Block-level backup and restore with storage efficiency | These features minimize the amount of data that must be stored or transferred over the network to create a recovery point for data protection. |
| Flexible caching | Distributed datasets require less hardware than making full replicas or deploying high-bandwidth, low-latency WAN infrastructure. |
| Dual and triple erasure coding | Customers get more usable data from less hardware than with mirroring or traditional RAID techniques (see the sketch after this table). |
| Advanced zone checksums for fixed 4K media | This feature reduces capacity overhead for error correction. |
| Aggregate deduplication | More data can be safely stored in the same usable capacity. |
| Temperature-sensitive compression | Compared with monolithic compression approaches, more data can be stored in the same usable capacity without affecting performance. |
| Compaction | By significantly increasing the efficiency of data compression for specific datasets, including initialized but unused database regions, more data can be stored on the same hardware. |
| NVMe over Fibre Channel (NVMe/FC), NFS over remote direct memory access (RDMA), and SMB over RDMA support | Compared with non-RDMA protocols, these features significantly improve client CPU/GPU efficiency for high-performance OLTP-style small-block and AI training workloads. |
| NVMe over TCP (NVMe/TCP) support | Customers get most of the client benefits of a full NVMe offload with less networking hardware than with dedicated FC or RDMA over Converged Ethernet (RoCE) switching. |
| Continual improvements to software efficiency and functionality | Constant software improvement extends the lifetime and efficiency of all hardware assets, in contrast with hardware-refresh-centric approaches to upgrades. |
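To illustrate the dual and triple erasure coding row above, here’s a minimal sketch of the usable-capacity math. The 24-drive RAID group size is an illustrative assumption, not a sizing recommendation; the point is simply that parity-based protection leaves far more of the raw capacity usable for data than mirroring does.

```python
# Usable-capacity comparison: dual/triple parity erasure coding vs. mirroring.
# Assumption: a 24-drive RAID group (illustrative only).

drives_per_group = 24

def usable_fraction(drives: int, parity_drives: int) -> float:
    """Fraction of raw capacity available for data with the given parity count."""
    return (drives - parity_drives) / drives

dual_parity = usable_fraction(drives_per_group, 2)    # double-parity protection
triple_parity = usable_fraction(drives_per_group, 3)  # triple-parity protection
mirroring = 0.5                                        # two full copies of every block

print(f"Dual parity:   {dual_parity:.0%} usable")    # ~92%
print(f"Triple parity: {triple_parity:.0%} usable")  # ~88%
print(f"Mirroring:     {mirroring:.0%} usable")      # 50%
```

In this sketch, dual parity keeps roughly 92% of the raw capacity usable and triple parity roughly 88%, compared with 50% for mirroring, which is why erasure coding delivers more usable data from less hardware.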
The following advancements in NetApp hardware design over the years continue to offer numerous benefits.
| Feature | Benefit |
| --- | --- |
| Active-active high-availability controllers | Customers get more performance from less hardware than with active-passive high-availability configurations. |
| 30+ years of refining metadata management and read-ahead algorithms in RAM-constrained environments | Memory requirements for large datasets are significantly reduced compared with approaches that require most metadata (for deduplication and similar functions) to be cached in DRAM. |
| Dedicated nonvolatile write logging with integrated battery backup and destage to stable media | These features offer higher performance, greater resilience, and lower power requirements than typical battery-backed DRAM caches. |
| High-density TLC NVMe media | Compared with SAS-based SSDs, these media reduce per-gigabyte power draw and controller CPU load for high-performance OLTP-style small-block workloads. |
| Large-capacity QLC NVMe media | These media draw less power per gigabyte than smaller MLC and TLC SSDs do. |
| Media integrated into the controller enclosure | This integration eliminates shelf controller power draw. |
This collection of hardware- and software-centric power-saving features may explain why NetApp all-flash storage arrays use far less power than our competitors’ arrays do. Based on published vendor data, NetApp arrays consume between 30% and 43% less energy than comparable Pure arrays do. I’m using Pure as the point of comparison because that company makes so much noise in the market about its power efficiency and sets itself up as the standard to beat. As you can see from the following graph, NetApp’s biggest-selling controllers are easily more efficient.
I’ve written about Pure’s record of making unverifiable power efficiency claims, so I promise to delve into our data and the method behind this graphic in an upcoming blog post.
This advancement is by no means the end of NetApp’s power efficiency and sustainability journey. I feel confident that as time goes by, we will improve sustainability and efficiency for ourselves and for our customers. I believe that we will remain the partner of choice for our customers to meet their environmental sustainability, cost and power efficiency, and governance goals and requirements.
Compare all-flash arrays.
Investigate NetApp sustainability.
Ricky Martin leads NetApp’s global market strategy for its portfolio of hybrid cloud solutions, providing technology insights and market intelligence on trends that affect NetApp and its customers. With nearly 40 years of IT industry experience, Ricky joined NetApp as a systems engineer in 2006 and has served in various leadership roles across the NetApp APAC region, including developing and advocating NetApp’s solutions for artificial intelligence, machine learning, and large-scale data lakes.