While writing this article I found myself reflecting on the state of IT when I started at NetApp in 2012. The world was a different place. GCE and Azure were facing an uphill battle against incumbent AWS, we were talking about OpenStack for private cloud, and Siri was still in beta. And even though 2012 is an anagram of 2021, I think we can all agree that the entire world has been fundamentally reshaped in ways we could never have envisioned back then.
Amid this never-ending churn, I found myself drawn to identifying and solving the biggest, baddest storage problems I could find. There’s no shortage of those, and the mainstream arrival of artificial intelligence presents the latest opportunity for companies to reinvent themselves and make more efficient use of the goldmine of data they find themselves sitting on.
Having been deeply involved in solving traditional high-performance computing storage challenges, I already knew that BeeGFS solves big data problems. But what about large-scale analytics and AI specifically?
We can roughly divide organizations running "HPC workloads" into two broad categories:
Joe McCormick is a software engineer at NetApp with over ten years of experience in the IT industry. After nearly seven years at NetApp, Joe's current focus is developing high-performance computing solutions around E-Series. Joe is also a big proponent of automation, believing that if you've done it once, you shouldn't have to do it again.