Case Study: Developing an AI Editorial Policy

September 23, 2024 By: Cassie LaJeunesse

Are you developing an AI editorial policy? Here is how one association management company created one that fits its needs.

Like any new development in technology, generative artificial intelligence (AI) has its supporters and its detractors. There are valid points on both sides: AI can be used to streamline workflows and increase efficiency, but it can also open your organization up to misinformation and ethical issues. That’s why it’s important to be proactive. If your association management company generates content in any way—whether for publications, websites, or social media—it is probably a good idea to put a policy in place detailing acceptable uses of generative AI tools.

What Is Generative AI?

Generative AI is essentially the use of computer algorithms to generate or modify content in a way that mimics human creation. Generative AI can be used to develop text, images, video, and audio. Large language models (LLMs), the technology behind tools such as ChatGPT and Claude, are a type of generative AI trained on massive amounts of text; they identify patterns in that text and then use those patterns to generate new text.

Do Your Research

As the editor and publications specialist at Capital Association Management (CAM), I was tasked with developing our AI editorial policy. With AI being such a hot topic of conversation, especially among creators and editors, I found many resources to help me with this process. I have included them in the resource list below.

Two online courses were especially helpful: “AI Content Policy for Editors” from Frushour A.I. Consulting & Editorial and “AI in the Newsroom: The Ethical Approach” from the Thomson Foundation. Both courses provide real-world examples of AI use, both positive and negative, as well as editorial policies developed by other organizations. Other research included reading up on AI as it relates to copyright law and referencing organizations like the Authors Guild and Poynter to learn more about how other writers and editors are approaching the AI conversation.

Personalize Your Policy

Opening up the conversation about AI in your organization can be overwhelming—everyone has different opinions. To get started, I wanted to understand CAM staff’s feelings about AI, so I set up a simple survey to find out how my coworkers use AI and how they might want to see it used (or not used) in the future. After the information overload of my research, the survey results helped me tailor an AI policy that fits our organization. I made sure the policy would not restrict beneficial uses of AI, while still addressing the safe and ethical use of AI tools.

In my opinion, one of the most important pieces of our policy is a note that explains that the policy itself is subject to change. Artificial intelligence is changing rapidly, so our policies cannot be static. Our policy asks staff to monitor any major changes in AI tools that they use regularly and inform leadership so that we can revisit the policy if need be.

Develop Guidelines

Throughout my research, I accumulated a list of ethical guidelines that helped inform our policies. These aren’t concrete rules so much as guiding principles to consider when using AI tools in our work. For our policies, those guidelines are:

  • AI should complement, not replace. AI should never be responsible for the primary development of content.
  • Human contribution and oversight are imperative. Editorial responsibility must remain with the humans on our team. Be aware of inaccuracies and biases in AI output.
  • Copyright and other rights must be respected. Keep ethical, legal use of content in mind. Many LLMs draw from copyrighted content, and the authors of that content have not always consented to its use for AI purposes. Privacy must also be considered—do not upload any confidential information or data into publicly available AI tools.
  • Transparency is key. We will use AI in some situations, and that is great! However, when we do so, we should be transparent with any involved stakeholders about the use of AI in content development and creation.

Make Your Policy Specific

The above guidelines are just that—guidance on how to use AI ethically. For the official policy, I tried to be as specific as possible about what CAM would and would not allow regarding AI use. Our final policy consists of a list of acceptable AI uses, including research, brainstorming, and revision, and a list of unacceptable uses, including verbatim content creation and any content analysis that involves intellectual property or private information. I included separate language detailing our stance on AI and copyright: We promise not to upload any proprietary content belonging to our staff or clients into AI tools for the purpose of generating content, and we prohibit the use of any of our content in training AI.

The most important thing to remember when developing an AI editorial policy is that it’s not a one-size-fits-all process. The AI conversation can be overwhelming, but it’s nothing to be afraid of if you are proactive and stay informed. Use multiple sources to help formulate your policy, but make sure it fits your organization and your specific needs. Work with your team to create a policy that works for all of you.

Resources

Cassie LaJeunesse

Cassie LaJeunesse is the editor and publications specialist at Capital Association Management.