
Mitigating the Legal Risks of AI Systems

By Bob Scheier

Feb 19, 2025

Charles Nerko, a data security litigation expert at Barclay Damon, on the risk management and governance approaches companies can take when adopting new AI applications.

Artificial intelligence systems may not be human, but Charles Nerko, an authority on AI and technology risk management, says that companies should think about supervising them as if they were.

Nerko leads the data security litigation team at Barclay Damon LLP and advises businesses on AI and outsourced technology arrangements and liability. In this interview, he discusses how adopting AI systems can bring new risks to businesses and unintended consequences if they are not managed properly.

Nerko explains the overarching legal issues C-suite executives should be aware of, including the responsibilities companies take on when they adopt these systems; how to establish policies governing their use that protect the company’s assets and reputation; and how to approach agreements with vendors that provide AI systems.

Bob Scheier: What are the main legal concerns with AI?

Charles Nerko: Imagine handing over critical business decisions to a mysterious black box that occasionally spits out wrong answers. That’s essentially what companies are doing with AI. These models operate in ways that are not easily understood or traceable. This lack of clarity raises concerns about liability when AI errors lead to unintended consequences.

Unauthorized use of copyrighted or confidential information to train AI models can lead to claims of copyright infringement or misappropriation. Utilizing AI systems to process personal data may violate privacy laws. AI tools like customer service chatbots might produce inaccurate or defamatory content, resulting in legal claims for consumer deception or libel. Decisions made by AI can be biased or discriminatory.

What laws apply to AI? 

The laws that govern AI are currently a patchwork of AI-specific regulations and common-law principles. While certain jurisdictions, like New York City, have specific rules governing AI use in areas such as hiring, there isn’t a unified or comprehensive set of laws.

In the current political climate, individual states, rather than the federal government, will continue to take the lead in promulgating AI regulations. States can pass state-specific laws (which can apply to companies that don’t even have a physical presence in the state), as well as regulate AI through common law.

What is common law and how does it work?

Common law is the flexible set of default legal principles that apply in the absence of formal legislation. It relies on previous court rulings to apply the law in new contexts, like how legal concepts of tangible property rights shaped the law of the Internet. Common law extends to AI by holding companies liable for their use of AI systems much as they would be liable for the actions of their employees. If an AI system causes harm while deployed by your company, your company can be held responsible.

What does all this mean for CIOs and other business decision makers?

The bottom line is that companies are expected to supervise their AI systems just as they would their human workforce. Even if you acquired the AI tool from a vendor, or if an employee independently uses a consumer-grade AI tool without company authorization, your business bears primary legal responsibility for any issues that arise. It’s therefore imperative that your organization knows how AI is being deployed across the enterprise, so that your AI use undergoes proper review.

At the very least, I recommend companies have a written AI policy that they revise at least annually, as well as whenever AI technology, usage, or laws change. This is essential for mitigating the liability, reputational, and security risks associated with problematic employee use of AI tools.

Without clear guidance, employees might independently use consumer-grade AI tools, or use enterprise AI tools in liability-creating ways. Consumer-grade AI tools lack proper security and data retention measures, potentially exposing sensitive business data or violating privacy regulations. Even enterprise-grade AI tools can be misused, such as by making hiring or financial decisions in a biased manner.

What should a corporate AI policy include?

It should define what AI tools can be used—for example, no use of consumer-grade AI—and their acceptable uses. It might, for example, list the AI models and content types that can be used to produce material shared with the outside world, while allowing more freedom for experimentation with AI to drive innovation, as long as the outputs undergo human review.

These policies should also cover how the organization ensures the legality and quality of the data used to train its AI models, how it guards against the use of infringing data, the human oversight needed to ensure the quality and fairness of AI output, and how to inform customers and business partners about the role AI played in generating content or making a decision.

AI policies should also be informed by best practices recommended by regulators. For example, the Department of Labor recommends that employers not solely rely on AI and automated systems to make significant employment decisions, such as whether an employee should be hired, fired, or disciplined. Whenever a company uses an AI tool to assist with these decisions, the company should document its procedures for human review and evaluation. 

How should businesses approach contracts with AI vendors?

Contracts are an important risk management tool, especially when problems with AI can place millions of dollars in legal liability and your company’s reputation at stake.

First, the contract should clearly outline the expected performance standards of the AI system and establish remedies if those standards are not met. This may include assurances about the provenance and safety of the dataset used to train the AI model.

Second, the contract should detail how any confidential or proprietary information will be used, including prohibiting your company’s data from being used for AI training without authorization.

Finally, seek indemnification from the vendor so the vendor shoulders the costs and legal risks associated with the AI’s actions, including issues arising from outputs and training data that are beyond your company’s control. For example, Adobe advertises that Adobe Firefly was trained on content that is licensed to Adobe or in the public domain. Adobe also offers enterprise plans that protect customers who receive a claim that an output infringes on a creator’s intellectual property rights.

What’s indemnification and why is it so important in contracts with AI vendors?

Indemnification is like a prenup—it specifies who gets stuck with the tab when things go south. In AI contracts, indemnification clauses spell out who must cover the legal expenses if the AI system’s actions lead to legal claims. For example, if an AI system generates content that infringes on someone’s copyright, an indemnity clause might shift the legal liability for that lawsuit to the AI provider. Without it, your company could be held financially responsible for copyright infringement, even though you didn’t develop the AI system.

Even when you have an indemnity, it’s important to understand the exceptions. For example, Adobe Firefly’s indemnity does not apply if a customer modifies the output, even with another Adobe tool. Even slight modifications—like cropping the image—can void the indemnity. This is another reason thoughtful AI governance and collaboration with your legal team are so important.

How can business leaders proactively address these legal challenges?

Involve your legal team from the outset to navigate emerging compliance issues, review vendor contracts, and develop your company’s AI policy. Don’t wait until the AI system has gone rogue to consult your counsel. 

It’s also crucial to work with an attorney who understands your business and its priorities. Look for an attorney who understands technology and can help you manage risks intelligently and strategically—without stymying innovation. Questions to ask your lawyer include:

  • What laws apply to our use of AI, including in states where we don’t have a physical office but may interact with consumers?
  • How should we address AI in vendor contracts?
  • What information should we maintain to demonstrate the defensibility and compliance of our AI systems?
  • Is our AI policy appropriately mitigating legal risks and incorporating best practices?
  • What types of insurance coverage should we consider to guard against AI liabilities?
  • Are there reputational risks, like potential erosion of consumer or employee trust, that we need to consider in how we deploy and communicate about AI?
  • What employee training do you recommend to ensure understanding of our AI policies and controls?

Finally, as AI laws continue to evolve, it’s important to maintain an ongoing collaboration with your legal team and to review and update your AI policy periodically. With President Trump kicking off his new term by revoking President Biden’s executive order on AI risks, I expect individual states to take the lead in issuing new laws that restrict how AI is used in employment decisions as well as when interacting with consumers and children. Your attorneys should be kept apprised of changes in how the company uses AI to ensure its ongoing use remains compliant and defensible. Because AI regulation is a new and frequently changing area of the law, we’re increasingly seeing corporate legal departments turn to outside counsel to buttress their in-house legal expertise.

Any final words of wisdom?

Think of AI as a really smart intern: capable but still needing supervision. Proactive management will help you harness AI’s potential while reducing unintended and costly legal consequences. 


Written by Bob Scheier

Bob Scheier is a veteran IT trade journalist and IT marketing writer.