
Embracing change: Establishing governance for enterprise use of GenAI


Beth O'Callahan

“It’s not about standing still and becoming safe. If anybody wants to keep creating they have to be about change.”

Miles Davis

Looking back on 2023, we saw almost 200 pieces of proposed generative artificial intelligence (GenAI) legislation nationwide, the issuance of a 111-page Executive Order on AI, and the passage of the EU's landmark AI Act. In parallel, the shadow of Napster looms. Artists and writers are filing lawsuits asserting that using their creations to train GenAI models constitutes copyright infringement. Music groups are issuing takedown notices. Getty Images filed a lawsuit accusing the startup Stability AI of illegally copying more than 12 million Getty photographs, their captions, and their metadata. Getty claims that Stability AI has even replicated Getty Images' watermarks in the GenAI output. A total of approximately 35 AI-related IP cases were filed in 2023. Around the world, courts and administrative agencies, including the U.S. Copyright Office and the U.S. Patent and Trademark Office, are grappling with how to address novel authorship and inventorship questions. GenAI has led to remarkable developments in innovation and complex developments in legislation and litigation.

The legal landscape

If you blinked at the end of 2023, you might have missed the lawsuit that capped off the year: the New York Times (NYT) suit against OpenAI and its investor, Microsoft. The lawsuit asks for monetary damages and a permanent injunction, and it also demands a court order compelling the destruction of all generative pre-trained transformers (GPTs) and large language models (LLMs) that incorporate NYT's intellectual property. Needless to say, the implications of a court order mandating the destruction of all GPTs and LLMs would be sweeping and potentially devastating.

If your role is in-house counsel, you are keenly aware of the dynamic legal landscape I describe. You are equally aware that all of your clients — technologists, innovators, professionals of all stripes, including legal colleagues — are looking to leverage GenAI in every aspect of their work. Everyone, and especially the CEO, sees this as an incredible opportunity to accelerate innovation, drive competitive advantage, and increase productivity.

With the explosion of legal activity and uncertainty surrounding GenAI, coupled with outsized demand for deployment of GenAI throughout the enterprise, how do we advise our clients? If we focus on risk mitigation, on safety and security, then the advice would be "Wait." Wait until we have clarity from all of the legislatures and courts around the world? Wait until the U.S. Congress acts? Wait until the U.S. Supreme Court solves our problems? Remember that it took the Supreme Court almost 10 years to decide on the copyrightability of certain areas of application programming interfaces (APIs). Ten years.

If we solve entirely for risk mitigation, for certainty and clarity, if we wait, we will not just fall behind, we will fail. We will fail at our jobs of solving problems, of strategic counseling, of enabling the enterprise. There is no question that we need to proceed judiciously: “Don’t Napster NetApp” is my refrain. But at the same time, we need to stay nimble, pivot and adjust, focus on mitigating the biggest risks, and also focus on enabling the business to move and respond rapidly. Otherwise, we will miss a unique opportunity to leverage the most transformative technological development since the release of the World Wide Web into the public domain. Like the World Wide Web, the embrace of GenAI by individuals and enterprises is inevitable. It’s already here.

Into the unknown

GenAI comes to all of us with the good, the bad, and the unknown. We know the good. GenAI models are trained on massive amounts of data. GPT-3 was trained on about 570 gigabytes of text and has 175 billion parameters. We have all seen its impressive results. We also know the bad. We chuckled when we read about judges weighing sanctions against lawyers who cited fake case law in legal briefs due to AI "hallucinations." On a more serious note, unlicensed and copyleft computer code has been found in GenAI output. If such output makes its way into commercial revenue-generating products, it may subject those products to injunction or force companies to open source their proprietary code. The rest lies in the unknown. No one knows how copyright laws will be applied to GenAI models and output. No one knows the amount of human intervention required to make GenAI output copyrightable. No one knows where the technology will take us.

With so many unknowns, we must still embrace GenAI. However, we must be thoughtful and deliberate in doing so. It's more important than ever that in-house lawyers fully understand the products and the business when we adopt GenAI. We need to understand the risk profiles of various use cases. Of course, protecting the ability of the enterprise to continue to offer products and services must be a priority in assessing GenAI adoption. Therefore, a cautious approach is warranted when incorporating GenAI models or output into products and services.

The role of in-house counsel

For these reasons, enterprise adoption of GenAI is an area where in-house counsel can and should play a key role not just in risk mitigation, but in business enablement as well. First and foremost, we should drive collaboration with and among other executives to establish the appropriate governance of the oversight, management, and use of GenAI across the company. We should counsel the business that the scope of GenAI adoption should depend on the use cases, and that we can help the enterprise make smart decisions about where and how to invest. Developers and other users who need GenAI tools in their work will need guidance on the licensing of the LLMs and on the data used in training and tuning the LLMs. GenAI tools trained on "licensed content" or "lawful content" are emerging. There is even speculation that OpenAI may be in talks to license content from publishers. At NetApp, we are partnering with engineering, IT, security, and other key stakeholders to navigate these issues and to establish guidelines and governance for the safe and responsible use of our enterprise data with LLMs.

No doubt the GenAI world will continue to evolve while the legal landscape remains uncertain. In-house counsel must be agile and stay up to date on the latest developments to guide the enterprise through this uncertainty. We know that technical and legal developments will test our agility in embracing GenAI technological breakthroughs. But as Miles Davis said, to enable the enterprise to keep creating, to keep innovating, we have to be about change.

Learn more

To learn what IDC has written about policy-driven governance and security for AI workflows, read the IDC brief.

For additional NetApp executive perspectives on AI and GenAI, read NetApp intelligence: Equipping you with insight, advice, and tools.

Beth O'Callahan

Elizabeth (Beth) O’Callahan is NetApp’s Chief Legal Officer, overseeing the company’s legal, policy, compliance, ESG, government relations and regulatory affairs. Elizabeth also serves as NetApp’s corporate secretary and chief compliance officer. Throughout her career, Elizabeth has advised leading technology companies on a variety of matters, including corporate governance, securities law, mergers and acquisitions, capital markets transactions, corporate compliance and ethics, data privacy, intellectual property, and litigation. Elizabeth holds a bachelor’s degree from the University of California at Los Angeles and a J.D. from Santa Clara University.
