
Private RAG - Unlocking Generative AI for the Enterprise

Mike Oglesby

The release of ChatGPT in late 2022 opened the world’s eyes to the potential of generative AI. Large language models (LLMs) and chatbots can help your organization turbocharge productivity and unlock innovation. Retrieval augmented generation, or RAG, is a particularly interesting use case for enterprises like yours. RAG augments pretrained LLMs with your company’s private data so that generated results incorporate proprietary information from within your organization. For example, a chatbot with access to your internal HR documents can help your employees with HR-related inquiries, and an LLM with access to your proprietary code can generate documentation for it.
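To make the flow concrete, here is a minimal sketch of the RAG pattern in Python. The toy keyword scorer stands in for the embedding model and vector store a real deployment would use, and the call to a private LLM endpoint is left as a comment; every name here is illustrative, not part of any NetApp product.

```python
# Minimal RAG sketch (illustrative only): retrieve the internal documents
# most relevant to a question, then prepend them to the prompt so the LLM
# can answer with proprietary context it was never trained on.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap; a real system would use
    embeddings and a vector store instead."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Augment the user question with the retrieved internal context."""
    joined = "\n---\n".join(context)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{joined}\n\nQuestion: {question}")

if __name__ == "__main__":
    hr_docs = [
        "Employees accrue 1.5 vacation days per month of service.",
        "Expense reports must be filed within 30 days of purchase.",
        "The parental leave policy grants 16 weeks of paid leave.",
    ]
    question = "How many vacation days do I get?"
    prompt = build_prompt(question, retrieve(question, hr_docs))
    print(prompt)  # In production, send this prompt to your private LLM endpoint.
```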

At NetApp, we have adopted RAG internally to deliver more efficient and relevant responses to documentation queries.

[Figure: Retrieval augmented generation flow diagram]

You can deploy RAG and maintain data security and governance

Unfortunately, enterprises have struggled to take advantage of these latest advancements due to very real concerns about data privacy and data governance. How does your enterprise prevent any leakage of sensitive information? How can your organization expose data to LLMs while maintaining data access controls and permissions across the data pipeline? At NetApp, we are in the business of helping you mitigate these concerns.

Secure your data on the move

NetApp’s industry-leading data movement technologies can help you quickly and efficiently move data to where you need it, while preserving access controls and file system permissions with enterprise-caliber governance and logging. For example, you can use NetApp FlexCache® technology to cache on-premises data in the cloud, where that data can be consumed by a cloud-based generative AI service.
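FlexCache volumes can also be created programmatically. The sketch below assumes the ONTAP REST API’s /api/storage/flexcache/flexcaches endpoint (available since ONTAP 9.6); the hostname, credentials, SVM, aggregate, and volume names are all placeholders, so consult the ONTAP REST API reference for the authoritative request schema.

```python
# Hedged sketch: create a cloud-side FlexCache volume whose origin is an
# on-premises volume, via the ONTAP REST API. All identifiers are placeholders.
import requests

ONTAP_HOST = "cloud-cluster.example.com"  # placeholder cloud ONTAP endpoint
AUTH = ("admin", "password")              # use a credential vault in practice

body = {
    "name": "hr_docs_cache",              # cache volume to create
    "svm": {"name": "cloud_svm"},         # SVM hosting the cache
    "aggregates": [{"name": "aggr1"}],
    "origins": [{
        "volume": {"name": "hr_docs"},    # on-premises origin volume
        "svm": {"name": "onprem_svm"},
    }],
}

resp = requests.post(
    f"https://{ONTAP_HOST}/api/storage/flexcache/flexcaches",
    json=body,
    auth=AUTH,
    verify=False,  # lab convenience only; verify TLS certificates in production
)
resp.raise_for_status()
print(resp.json())  # job reference for the asynchronous create operation
```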

[Figure: FlexCache diagram with NVIDIA high-performance compute tier]

If your internal data can’t be exposed to the cloud because of regulations or governance concerns, you can instead replicate or cache the data to an on-premises high-performance AI compute platform, where it can be consumed by an LLM-based application that is completely under your organization’s control. We are also exploring the use of these same data movement and caching technologies to efficiently keep the vector stores that power LLM-based applications up to date.
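In practice, such a refresh might look like the sketch below: after the cache or replica updates, re-embed only the files whose modification time is newer than the last sync. The mount point is a placeholder, and embed() is a stand-in for a real embedding model; the upsert into your vector database is left as a comment.

```python
# Illustrative incremental refresh of a vector store backed by a cached or
# replicated dataset. Paths and helpers are hypothetical placeholders.
import time
import pathlib

DATA_DIR = pathlib.Path("/mnt/flexcache/hr_docs")  # placeholder mount point
last_sync = time.time() - 3600                     # e.g., last sync one hour ago

def embed(text: str) -> list[float]:
    """Placeholder embedding; swap in a real model (e.g., sentence-transformers)."""
    return [float(len(text))]

if not DATA_DIR.exists():
    raise SystemExit(f"{DATA_DIR} is not mounted")

updated = []
for path in DATA_DIR.rglob("*.txt"):
    if path.stat().st_mtime > last_sync:           # changed since the last sync
        updated.append((str(path), embed(path.read_text())))
        # Here you would upsert (path, vector) into your vector database.

print(f"{len(updated)} documents re-embedded and ready to upsert")
```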

Protect your organization’s internal data

You can also use NetApp data management technologies to protect the internal data that you’re using for RAG, regardless of its format. NetApp Snapshot™ technology enables near-instant creation of space-efficient backup versions of vector stores and databases, so if your data is ever corrupted or compromised, you can quickly and easily roll it back to a previous version. With NetApp FlexClone® technology, you can quickly and efficiently clone data volumes for development and testing (DevTest) workflows. NetApp BlueXP classification can even scan your data to confirm that it doesn’t contain personally identifiable information (PII) or other sensitive information that shouldn’t be exposed to an LLM. If your dataset does contain sensitive data, you can use BlueXP classification to rapidly create a copy that excludes the offending files.
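As one example, a Snapshot copy of the volume backing a vector store can be triggered before each re-indexing run through the ONTAP REST API’s POST /api/storage/volumes/{uuid}/snapshots endpoint. The host, volume UUID, and credentials below are placeholders; treat this as a sketch, not a production script.

```python
# Hedged sketch: create a NetApp Snapshot copy of the vector-store volume
# via the ONTAP REST API before re-indexing. Identifiers are placeholders.
import datetime
import requests

ONTAP_HOST = "cluster.example.com"                    # placeholder cluster endpoint
VOLUME_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder volume UUID
AUTH = ("admin", "password")                          # use a credential vault in practice

snapshot_name = "vectorstore_" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
resp = requests.post(
    f"https://{ONTAP_HOST}/api/storage/volumes/{VOLUME_UUID}/snapshots",
    json={"name": snapshot_name},
    auth=AUTH,
    verify=False,  # lab convenience only; verify TLS certificates in production
)
resp.raise_for_status()
print(f"Created Snapshot copy {snapshot_name}")  # roll back to it if the index is corrupted
```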

[Figure: NetApp Snapshot copies and NetApp FlexClone volumes]

Create opportunity with NetApp intelligent data infrastructure for RAG

To adopt RAG use cases, your organization needs an enterprise-caliber intelligent data infrastructure that spans clouds and data centers, preserves governance, and promotes sustainability. That is exactly what NetApp offers. At NetApp INSIGHT® 2023 in Las Vegas, we demonstrated NetApp-powered RAG use cases in both Google Cloud and Amazon Web Services (AWS), and we are currently validating an on-premises RAG architecture built on popular open-source technology.

With NetApp’s intelligent data infrastructure, you can turn a world of disruption into opportunities. Explore more AI solutions from NetApp.

Mike Oglesby

Mike is a Technical Marketing Engineer at NetApp focused on MLOps and data pipeline solutions. He architects and validates full-stack AI/ML/DL data and experiment management solutions that span the hybrid cloud. Mike has a strong background in DevOps processes and tools. Before joining NetApp, he worked on a line-of-business application development team at a large global financial services company. Outside of work, Mike loves to travel; one of his passions is experiencing other places and cultures through their food.
