Developing Policies To Govern AI

July 2, 2024 By: Ann Link, CAE

Associations should consider protections not only for their own content but also against the risks associated with using AI to generate new content.

Artificial intelligence is ubiquitous, but a recent survey conducted by the ASAE Communication Professionals Advisory Council found that only a quarter of respondents reported having a formal AI policy for their organization; most cited a lack of understanding of the technology as the reason.

Large language models like GPT-4, the technology behind ChatGPT, rely on vast amounts of text data to generate responses and content by learning the language patterns and structures present in that text. This text data is typically sourced from publicly available material, such as the open content on organizations’ websites, as well as from information users input directly into the tools.

This presents a new range of legal and reputational challenges. By developing an AI policy, an association can protect itself and help its members navigate AI.

Start By Protecting Your Copyrighted Content

While an association’s books, academic papers, research articles, and other published materials may be protected by copyright, disreputable players do not always play by the rules and may treat any content as fair game.

AI technology may create new risks for copyright owners; therefore, associations should protect their copyrighted content from AI as they would from any other threat of copyright infringement, said Dorothy Deng, partner at Whiteford, Taylor & Preston LLP. “Register your high-value content with the U.S. Copyright Office, whether that be a book, guidelines, or a video course. Just because it is behind a paywall does not mean it is legally protected.” (An earlier article provides practical tips on how to protect your copyrighted material.)

The Risks of Using AI Tools and AI-Generated Content

In addition to the concerns about your organization’s original content, there are legal risks involved with using AI to generate content.

Potential for copyright infringement. AI-generated content could substantially resemble copyrighted content from another organization and inadvertently expose your association to a copyright infringement claim.

AI-generated content is not copyrightable. Under current copyright law, only humans can create works of authorship. The U.S. Copyright Office will refuse to register an organization’s AI-generated content, so be aware that others can reuse it freely.

Providing false information. The information provided by AI tools may not be accurate, and providing false information can ruin your organization’s reputation. Deng recommends that associations consistently check the accuracy of content generated by AI.

Potential for data breaches. When users input content into generative AI tools, even paid versions of applications, that content or information could “exit” the organization. Closely read the user terms and conditions, which describe how the models use content entered into the application. The AI tool should have clear policies stating that users own their data and should use end-to-end encryption to protect data privacy, including the end user’s personal data.

Phishing scams. Safety is another key element for an association. “Be sure AI tools are reputable and properly safeguarded,” Cathy Lada, D.Sc., CAE, warns, “because some tools are just phishing” (i.e., communication scams meant to deceive people into providing sensitive information or downloading malware).

What to Include in an AI Policy

Given the potential risks associated with the use of AI and the number of people experimenting with it in the work environment, associations should consider developing a policy on AI. Any policy should include sections on copyright, fact-checking, and data security at a minimum.

“If strong organizational policies are already in place, the AI policy need not be lengthy,” Deng said. But an AI policy should not conflict with any existing association policy. For example, if you have a confidentiality policy in the employee handbook, then employees are bound by the same confidentiality terms in connection with their use of generative AI tools at work.

Generally, financial and human resources information, member information, copyrighted material, and executive committee transcripts are barred from use in AI tools to prevent confidential information from becoming publicly available and exposing the association to possible legal action. Policies often explicitly list the documents or data that should not be used in an AI tool.

Additionally, AI policies commonly address transparency, by requiring that AI-generated content be labeled as such, and equity, by requiring that datasets and outputs be free of bias.

Some organizations adopt stricter policies that include a requirement for formal permission to use AI tools; the conditions under which permission is granted, such as a reliable data source and accurate, unbiased outputs; and the consequences for violating the policy.

Lada, who has done extensive research on the practical applications of AI, noted that with available templates on the internet, it shouldn’t take more than two hours to develop a policy document.

Educate Members on AI and Content Protection

Associations play a vital role in educating their members on the importance of content protection. Strategies for member education may include:

  • Sharing the organization’s AI policy with members
  • Providing resources and guidelines for creating and sharing content responsibly
  • Hosting webinars or workshops on risks of AI and ways to protect copyrighted material
  • Encouraging members to report instances of unauthorized content usage or AI manipulation

As AI continues to reshape the digital landscape, associations must adapt to protect their valuable content assets. By implementing policies, educating staff and members, and staying vigilant against emerging threats, associations can navigate the evolving AI landscape with confidence.

Lada reminds us that AI can introduce significant efficiencies and that numerous safe options are available. Rather than fearing AI, associations should embrace innovation and lead the way in establishing ethical standards for content usage.

Ann Link, CAE

Ann Link, CAE, is a former publisher and current chief operations officer for the National Association of Regional Councils. She is a member of ASAE’s Communication Professionals Advisory Council.