NVIDIA's GPU Technology Conference (GTC), the premier artificial intelligence (AI) and deep learning conference of the year for developers, is coming up March 17–21 in San Jose. NetApp is a Platinum Sponsor of the event. Be sure to stop by NetApp booth #917 to see our expanding ecosystem of AI hardware and software solutions and get acquainted with our team and AI partners.
From autonomous vehicles to precision medicine to financial services, the pace and momentum of AI continue to increase in 2019 as enterprises face the challenge of moving machine learning and deep learning projects from pilot to production. NetApp helps you accelerate AI infrastructure and remove bottlenecks to make these critical transitions as successful and painless as possible. With the NetApp® Data Fabric, the right data is always available in the right place at the right cost.
AI success requires an ecosystem of effective hardware and software solutions. NVIDIA and NetApp are working closely to bridge the gap between the CPU and graphics processing unit (GPU) universes and to better address a wide range of machine learning, deep learning, training, and inference needs. Our NetApp ONTAP® AI system brings together the strengths of two industry leaders: NVIDIA DGX systems, the essential tool of AI research and development, and NetApp's data pipeline expertise.
The two companies have strong alignment from the executive level to engineering to sales. We are partnering to deliver complete solutions that address data management and GPU acceleration both on premises and in the cloud. Together, we help customers address their biggest AI challenges.
A rapidly growing ecosystem of the most innovative AI solution partners is working with us to help address your data science, machine learning, and deep learning needs, from pilot to production and from edge to core to cloud to colocation.
By combining cloud-connected flash with the best supercomputing solutions, NetApp accelerates AI outcomes at any scale. At the NetApp booth this year, we’re featuring an ONTAP AI solution combining three NVIDIA DGX-2 systems with NetApp AFF A800 cloud-connected flash storage. A single DGX-2 delivers the AI computing power of over 300 CPU servers. We are currently benchmarking these DGX-2 systems with the NetApp Customer Proof of Concept (CPOC) lab and Arrow Electronics, a NetApp partner. We plan to share the results of these real-world AI benchmarks at the show. We expect the benchmark results to be eye-opening and quite possibly record-breaking, extending the range of what’s possible with AI and high-performance computing (HPC).
In addition to DGX-2 benchmarks, we’ll be publishing results for a 7-node DGX-1 pod with NVIDIA and Groupware Technology.
One of the challenges that comes with scaling AI projects is that not every enterprise has the data center facilities to support the latest high-performance AI computing infrastructure. Many of you are already discovering that you need new deployment options to support the latest equipment. NetApp is demonstrating its DGX-2 ONTAP AI system in a modular, Dynamic Density Control (DDC) liquid-air-cooled cabinet from ScaleMatrix. This cabinet combines the efficiency of water with the flexibility of air, cooling up to 52 kW of power load in a single 45U cabinet. These cabinets can be deployed in nearly any environment, and they provide clean-room quality environmental control, guaranteed air flow, and integrated security and fire suppression.
ScaleMatrix, a provider of high-density colocation and high-performance cloud services, has four data centers serving the United States that incorporate this cabinet technology to offer the highest power density available. ScaleMatrix is part of NVIDIA’s DGX-Ready Data Center program, giving customers access to data center services through a network of colocation partners. Colocation solutions provide more flexible deployment options and faster time-to-market to meet your AI needs.
Read the ScaleMatrix GTC blog here.
Also featured at the NetApp booth, in partnership with Cisco and NVIDIA, is FlexPod AI. The FlexPod® Datacenter for AI solution optimizes converged infrastructure for AI and other analytics workloads. It includes Cisco UCS blade and rack servers, Cisco Nexus 9000 Series switches, Cisco UCS 6000 Series Fabric Interconnects, and NetApp AFF A800 flash storage arrays. Featuring NVIDIA V100 Tensor Core GPUs and powered by the NVIDIA GPU Cloud (NGC) software stack, FlexPod AI is an ideal GPU-enabled solution for current FlexPod customers and existing Cisco UCS shops. The Cisco Validated Design (CVD) for FlexPod AI was just released. Stop by to explore FlexPod AI technology in depth using a Kaon touchscreen demo.
No single deployment option is right for every need. That’s why we offer solutions to cover a wide range of requirements. NetApp partner H2O.ai will be in the NetApp booth to highlight machine learning in the cloud. H2O.ai is the creator of the H2O open-source machine learning platform, trusted by hundreds of thousands of data scientists in over 18,000 enterprises globally.
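To give a concrete feel for the kind of workflow the open-source H2O platform supports, here is a minimal sketch of training a gradient boosting model with the H2O Python package. This is an illustrative example only; the file path and column names (such as "churn.csv" and "churned") are hypothetical placeholders, not details from any NetApp or H2O.ai demo.

# Minimal sketch: train a gradient boosting model with open-source H2O.
# The dataset path and column names below are hypothetical placeholders.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()  # start (or connect to) a local H2O cluster

# Load a CSV into an H2OFrame and split it into training/validation sets.
data = h2o.import_file("churn.csv")
train, valid = data.split_frame(ratios=[0.8], seed=42)

# Train a gradient boosting machine to predict the "churned" column.
predictors = [c for c in data.columns if c != "churned"]
model = H2OGradientBoostingEstimator(ntrees=50, max_depth=5, seed=42)
model.train(x=predictors, y="churned", training_frame=train, validation_frame=valid)

# Evaluate on the held-out split.
print(model.model_performance(valid))

The same H2O code runs unchanged whether the cluster is launched on a laptop, on premises, or in the cloud, which is what makes it a convenient fit for the hybrid deployment options discussed above.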
The right AI hardware and software partners are essential for achieving your goals. We created the NetApp AI Partner Network to foster an ecosystem of solutions and expertise that extends and enhances your success with NetApp technology. This network includes a growing number of consulting, channel, cloud, software, hardware, and colocation partners.
“By coupling NetApp storage solutions for deep learning with allegro.ai's end-to-end deep learning lifecycle management platform, customers get an optimized solution for their data management from the physical layer up to the application layer. This means they can focus on the science rather than the tool chaining, allowing them to win the race by delivering higher-quality products, faster and more cost-effectively.”
—Nir Bar-Lev, CEO and Co-Founder, Allegro.ai
“ScaleMatrix data centers and our DDC cabinet technology portfolio were designed to make deploying any hardware—at any density—easier. Our partnership with NetApp and NVIDIA does just that. We’ve combined efforts to create integrated technology platforms that help eliminate data center limitations, speed solution delivery, and simplify and accelerate the deployment of intense AI workloads.”
—Chris Orlando, CEO of ScaleMatrix and Co-Founder of DDC Cabinet Technology
“Parabricks is excited to collaborate with NetApp on solving the computing and data management challenges of the genomics industry. Genome sequencing currently produces 100 petabytes of digital information per year. Considering this rate, there is a need for computing, storing, and data management to move ahead of the upcoming challenges in genomics. By combining our rapid computing solution with NetApp ONTAP AI, we are addressing this challenge.”
—Mehrzad Samadi, Co-Founder and CEO, Parabricks
Several featured partners will be demonstrating their latest innovations at the NetApp booth this year:
We’ll be featuring a range of additional hardware and software solutions in our booth this year, including:
Come by to check them out along with the rest of our featured hardware and demonstrations. We’ll be giving away ONTAP AI swag including power banks and wireless chargers.
NetApp Senior Technical Director Santosh Rao is leading our speaking session this year. Attend this session for your chance to win a JBL Pulse 3 portable Bluetooth speaker.
The session, "Architecture Considerations for Federating ML and DL Data Pipelines across Edge, Core, and Cloud," takes place on Wednesday, March 20, at 9:00 a.m. in Marriott Ballroom 2.
Guest Presenters:
Meet with a NetApp executive or technical expert in a collaborative meeting customized to your needs. Possible discussion topics:
Stop by for a ticket to the NetApp customer reception at the Tanq Bar on Tuesday, March 19, from 7 p.m. to 9 p.m. It's time to kick back with NetApp data experts; drinks and food are on us.
See you in San Jose!
Octavian Tanase is the Senior Vice President of Engineering for NetApp’s Hybrid Cloud Group, which enables seamless cloud experiences on-premises and provides core foundational technologies to NetApp’s public cloud capabilities.
In his current role, Octavian is responsible for driving product innovation, measurable business outcomes, and customer engagement and success with NetApp’s on-premises portfolio of products, including ONTAP, the industry’s leading enterprise data management software for shared environments.
Prior to his leadership of the Hybrid Cloud Engineering group, Octavian was responsible for several of NetApp's award-winning products, including AltaVault, SnapMirror, MetroCluster, and SnapVault.
Before joining NetApp in 2010, Octavian led the Java Platform engineering group at Sun Microsystems/Oracle. He has also held various engineering roles in several start-ups in Silicon Valley.
In addition, as an active champion of Diversity, Inclusion and Belonging (DIB), Octavian is passionate about ensuring that his colleagues feel heard, represented, and understood, and he was responsible for introducing the first unconscious bias training at NetApp. He plays a key role in various DIB programs at NetApp, including serving as a sponsor of Advancing Minority Interests in Engineering and previously serving as the executive co-sponsor of NetApp Global Diversity, Inclusion and Belonging, among other initiatives.
Outside of work and when not spending time with his family, Octavian enjoys skiing, basketball and judo. He is also an active advisor for BackBox, a network automation startup.
Octavian holds a bachelor's degree in Applied Mathematics from the University of California, Berkeley, and an Executive MBA from Stanford.