Member Data and the Ethics of AI

March 27, 2025 | By: Joy Davis, CAE, and Richard E. Shermanski

AI is quickly transforming the way associations operate, raising ethical questions about member data, transparency, and accountability. As you develop AI policies in your organization, keep these key considerations in mind.

AI’s arrival is driving ethics conversations across society right now, in part because implementing AI means revisiting work processes at their most detailed level. Building AI solutions forces us to reexamine the guardrails we place around some tasks and why those boundaries are there.

Ethics are at the foundation of boundary setting in member and volunteer governance. Our social norms and expectations of one another — our ethics — help us protect our relationships with our members and volunteers.

And like most ethical issues, the discussion around AI is often a discussion about competing values: What is more important to our members, and how do we create policies and procedures to manage our use of AI in relation to that? What do we prioritize when we decide how we’re going to operate the organization?

When it comes to AI, our ethics should help determine which rules we put in place and what policies we create to manage our data and structure the member experience.

This article aims to help you develop an effective AI policy by inviting you to consider ethical questions related to AI and to prioritize your values in a way that reinforces member and volunteer confidence in your organization.

Aligning With Member Expectations

What do your members expect you to do with data about their demographics, where they live, what content they access, and how they engage with your organization?

Since IBM’s Watson computer won Jeopardy! in 2011, AI’s ability to derive unexpected insights from data has become clear. The decades of data we’ve collected about members can now power AI insights to enhance organizational efficiency and product development.

However, this raises critical questions about member expectations and consent. Organizations must consider:

  • Members’ understanding of how their data would be used when they shared it
  • Data security measures
  • Communication about data-sharing practices
  • Partner data access policies
  • Opt-out options and mandatory data requirements

Associations must balance innovation with transparency and member trust through clear policies and open communication about data practices. Before you write your AI policy, ask yourself what your members may already assume it says, and how you’ll reset their expectations.

Building Clarity Around Partner Access

How will consultants use data they receive or gather from your organization, and does this align with member expectations?

Consultants and vendors are as eager as any association to use AI to reduce their costs and increase revenue. Members expect organizations to have vetted these partners and to have secured their data, along with any insights into member behavior and activities that may be drawn from it.

Make asking consultants and vendors for their AI policies a routine part of negotiations and check-in conversations. When new data-related services and programs are offered, ask questions about how the data is secured, managed, and analyzed.

What will the AI software do with this data?

Facebook built itself on a business model of encouraging users to join the platform at no cost, because the real value to the company was what it could do with data about who users know, who they interact with, what they post, and what they buy.

When you partner with companies that manage vast amounts of data from various clients, it’s important to learn what they intend to do with that data. Will members’ information be added to a data pool the software provider will mine for its own purposes? What are those purposes?

Ensuring Your Members Don’t Compromise Data

How does AI’s presence affect your volunteers’ and subject matter experts’ contributions to organizational intelligence gathering?

AI-powered note-taking bots are everywhere right now, including in your members’ hands, as a number of ASAE members have discovered recently. With AI note-taking and note-management tools becoming ubiquitous, organizations must establish clear confidentiality protocols for committee and board meetings. That includes guidelines for volunteers who access sensitive organizational data on personal devices and who may use AI tools to analyze that data.

Key considerations include controlling note-taking software use, establishing explicit confidentiality expectations, protecting candid discussion environments, and monitoring member use of shared information. These policies protect organizational interests while maintaining productive collaboration.

Do members understand that committee spaces are confidential? Do they know they should help secure your data? These expectations will need to be clearly communicated to avoid ambiguity.

Balancing Business Transformation and Member Impact

Will AI force you to reassess your business model and, if so, how will that impact your members?

If your organization’s business model depends on being the only source of a particular type of information, or on a profession that AI will largely replace, you probably already know you need to begin changing your strategy.

As association executives, we have a unique ethics-derived perspective in these situations: We are responsible both for caring for our members and ensuring the organization’s future.

In these cases, leadership must weigh organizational survival against serving member segments whose roles may be diminishing. This decision involves balancing values related to members, the mission, and organizational goals. There are often no easy answers.

Once the board decides on a new direction for the organization, consider your ethical obligations to the members who may be affected by the new strategy. How will you explain this to them? What is their role in the new course the organization is taking?

How does the impact of AI intersect with the mission and goals of your members?

Vox reports that today’s AI already uses as much energy as a small country. Scientists are concerned that it is damaging the scientific record with a tsunami of fake studies, among other problems. But it also has the potential to save you, or your competitors, money and time.

An association whose members are concerned about the climate, scientific research, or another field affected by AI must revisit its values and ask questions that prioritize them: How do members want the organization to operate in a world with AI? What happens if we use AI while fighting its impact?

Protecting Your Community

What are the expectations of your members and volunteers in how they treat each other through AI?

Deepfakes and similar technology-enabled tactics have joined the tool set available to anyone willing to pursue their cause through shady means. What if your members or partners use AI to take advantage of one another? How will you manage their behavior?

Associations may need clear policies addressing AI-enabled harassment, impersonation, or misrepresentation between members. Consider implementing guidelines for AI-generated content in member communications and establishing consequences for misuse.

Moving Forward With Intention and Integrity

As association leaders define the boundaries they’ll place around AI in their organizations, they must keep their ethics at the forefront. By balancing the competing values of organization and member, and of innovation and expectation, they can find answers that will reinforce their relationships with members in the face of great change.

Joy Davis, CAE

Joy Davis, CAE, is Deputy Executive Director of the American Association of Pharmaceutical Scientists.

Richard E. Shermanski

Richard E. Shermanski is an ethics and compliance leader, legal ethics expert, and professional development specialist.