The objectives of the storage automation program were to decrease time to market for requests, improve quality, and build a reputation by leading IT in automation capabilities. Johnson shares his storage automation experiences here as part of a series to help other professionals avoid pitfalls and seize opportunities to deliver value to their organizations.
Professionally, I lead teams and organizations in agile infrastructure design and management. Personally, I am a dog lover and trainer. I enjoy working with dogs as they mature from puppies to lifelong companions. Teaching a puppy to become a loyal member of your pack is about setting expectations, leading by example, and being committed to a long-term outcome. I mean this as a sincere compliment—leading projects and programs for infrastructure teams and leading dogs are similar in many ways.
I’m proud of my team’s journey to automating all things storage. Like raising our three dogs, this program started with us not knowing how it would turn out. We decided to run an experiment and discover where it would take us.
Automation for storage is no longer something that’s nice to have; it’s imperative if a storage team wants to remain relevant in a cloud-first, innovative landscape. Responding to the pace of business, anticipating emerging technologies, and delivering with quality—all these are mandatory today.
In this series of blog posts, I’ll share the lessons learned from our program, the success we’ve had, and how an automation initiative in support of DevOps and containers positioned us as leaders in our organization.
Before the engineers start creating playbooks and recipes, we establish the objectives we want to achieve. Here are a few objectives that are probably familiar to you: faster delivery of requests, higher quality, and relief from the manual work that overwhelms small teams.
In my experience, addressing these objectives requires thinking about collaboration and the tools you need to start on the path.
Simply put, how do we communicate, and how do we work together?
Are other teams aware of your undertaking? Do you have their support and time as required? Are the tools in place, or do you need to make a case for bringing them on board?
Also, in my experience, you need to understand where you are on the path, how you will display progress, and what criteria determine success. How will you know when you’re finished? How will you capture the baseline of your objectives? It’s fine to start with subjective metrics if you aren’t measuring anything yet, but be sure to put processes in place to capture the critical measures of success.
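For example, if decreasing time to market is one of your objectives, the baseline can be as simple as mining your ticket history for request turnaround times. The Python sketch below assumes a hypothetical CSV export with opened and closed timestamps; the file name and field names are placeholders, and the point is simply to record a number before you automate, so you can show progress later.

```python
# Minimal sketch: compute a baseline "time to market" for storage requests
# from a ticket export. The CSV layout ("opened"/"closed" ISO timestamps)
# is a hypothetical example, not any real ticketing system's schema.
import csv
from datetime import datetime
from statistics import median

def baseline_days(path: str) -> float:
    """Return the median days from request opened to request fulfilled."""
    durations = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            opened = datetime.fromisoformat(row["opened"])
            closed = datetime.fromisoformat(row["closed"])
            durations.append((closed - opened).total_seconds() / 86400)
    return median(durations)

if __name__ == "__main__":
    print(f"Baseline time to market: {baseline_days('storage_requests.csv'):.1f} days")
```

Even a rough number like this gives you something to report against once the automation starts to land.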
In 2018, I was working with a team of six people providing storage for IT, with approximately 7 PB of storage in use. We were in scramble mode, with the looming pressure of figuring out solutions for persistent container storage. The manual work was overwhelming the team, and we knew that we needed to do something different. We concluded that the only way to scale was to automate our storage platform to gain efficiency. The road to storage automation was coming into view.
My team had three main objectives: decrease time to market for requests, improve quality, and build our reputation by leading IT in automation capabilities.
For example, we worked with our partners to standardize the company’s data protection needs, as determined by the criticality of each application. Then we automated data protection configuration as part of the provisioning process.
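To make that pattern concrete, here is a minimal Python sketch of the idea: each criticality tier maps to a standard protection policy, and the policy is applied in the same step that creates the volume. The tier names, policy values, and the provisioning call are all hypothetical placeholders, not our actual tooling.

```python
# Sketch of the idea: derive a data protection policy from application
# criticality at provisioning time. Tier names, policy values, and the
# provisioning step are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    snapshot_schedule: str   # how often point-in-time copies are taken
    retention_days: int      # how long copies are kept
    replicate_offsite: bool  # whether the volume is mirrored to a second site

# Standardized mapping agreed on with partners (example values only).
POLICY_BY_CRITICALITY = {
    "mission-critical": ProtectionPolicy("hourly", 90, True),
    "business-critical": ProtectionPolicy("daily", 30, True),
    "standard": ProtectionPolicy("daily", 14, False),
}

def provision_volume(name: str, size_gb: int, criticality: str) -> None:
    """Provision a volume with protection configured up front, not bolted on later."""
    policy = POLICY_BY_CRITICALITY[criticality]  # fail fast on an unknown tier
    # In a real playbook this step would call the storage platform's API;
    # here we just show that protection settings ride along with the request.
    print(f"create volume {name} size={size_gb}GB "
          f"snapshots={policy.snapshot_schedule} "
          f"retention={policy.retention_days}d "
          f"replication={'on' if policy.replicate_offsite else 'off'}")

provision_volume("app01-data", 500, "business-critical")
```

The design point is the mapping itself: once protection is a lookup keyed on criticality instead of a per-request conversation, it can travel with every provisioning request automatically.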
In upcoming posts, I will detail how we tackled these objectives. I’ll talk about the metrics and the successes. I’ll also describe some of the challenges we faced along the way. I hope you find this series informative, and that it will inspire or help you with your storage automation projects.
To learn more, check out this video that we made at NetApp INSIGHT® 2021, Build Storage as a Service (STaaS) with Micro-Services.
Tony Johnson is the SRE Automation Lead for IBM. He is responsible for the automation of hybrid cloud services and platforms for the IBM CIO team. Previously he was Storage Manager at Red Hat, from 2014 to 2020, leading a group of 6 engineers responsible for IT storage.