Ethics Thought Leadership

Mastering the Machine: Creating an AI Policy for Your Organization


For further reading on this topic: PRSA released a comprehensive framework titled “Promise and Pitfalls: The Ethical Use of AI for Public Relations Practitioners” last fall. The guidance aligns with PRSA’s Code of Ethics and offers best practices to prevent and manage ethical challenges arising from the improper use of AI tools. The report, authored by PRSA’s Board of Ethics and Professional Standards (BEPS), results from extensive research, expert input and peer review.


Artificial intelligence (AI) tools have changed the PR and communications profession. They can reduce time and effort devoted to routine tasks and free professionals to focus on higher-order thinking and creativity. But AI tools carry risk, so organizations need to provide guardrails to ensure they are used appropriately.

As more communications professionals are being tasked with helping draft policies for their companies and organizations, here are some points to consider.

AI policies should help your employees use AI tools ethically and securely, consistent with your organization’s values

If your organization uses an internet-based AI tool such as ChatGPT, data you share in a prompt may become part of the tool’s training data and available to other users. That makes it inappropriate, and possibly illegal, to include customers’ or employees’ confidential information in a prompt. Likewise, if you use competitively sensitive data in a prompt, it may become part of the AI tool’s training data, and your competitors might benefit.

Some organizations use generative AI tools that do not access the internet or train on the organization’s data. Even then, data input into the tool should be limited to what might be shared in other circumstances. If your firm limits access to client data, then it should not be part of a prompt in an AI tool everyone in your firm can use.

A well-crafted policy for using AI can prevent costly missteps that could damage an organization’s reputation and bottom line. In one widely reported case involving attorneys who submitted a court filing with inaccurate AI-generated content, the judge wrote:

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the judge wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Like attorneys, PR professionals must ensure their work is accurate, whether it is created with AI or not.

Focus areas for policies

Transparency: Consistent with the PRSA Code of Ethics and the Society’s values, many practitioners maintain that an image created by an AI tool, especially one that resembles a photograph, should be labeled as AI-generated.

Opinions about labeling text written by AI tools are less settled. Some practitioners liken AI to a speechwriter for an executive, whose contribution is rarely disclosed. Others say that, in the interest of transparency, practitioners must disclose when they use AI.

Bias: If the data an AI model trains on is biased, the AI-generated content will be biased. Regularly audit the training data, if possible, and the AI models themselves for bias. See if you can identify any disparities in how different groups are represented or treated by the AI. Incorporate human oversight in your decision-making process: ultimately, AI is a tool, and your organization is responsible for its decisions. Review and adjust AI outputs to achieve fair outcomes.

Intellectual Property: Copyright law is dynamic and complex, especially concerning AI-generated content. When using AI tools, it’s a good practice to familiarize yourself with the terms of service or license agreements. These may state whether any content created can be used for commercial purposes. Your organization does not want to be accused of plagiarism or develop content via AI that cannot be protected by copyright.

Safeguarding Confidences: Make certain your organization’s and clients’ information is secure when using AI tools. Your duty to protect information may prevent your organization from using AI in some cases, which should be explained in your policy.

Accuracy: Like an online search, an AI tool may return helpful information. But just as there is false information online, AI tools may produce outputs with mistakes, misinformation or even fabrications, sometimes called “hallucinations.”

Action steps

  • Don’t go it alone! Work with your legal, technology, data privacy, security and communications teams to write your policy, socialize it across your organization and stay informed about new regulations and norms around AI tools.
  • Share the risks of using AI and have intentional discussions about how your organization will use AI tools.
  • Provide examples to illustrate appropriate and safe uses as well as inappropriate uses.
  • Define which roles and functions may use AI tools, and spell out their responsibilities for using the tools safely and ethically.
  • Consult with your legal team on how to mitigate risks such as plagiarism, copyright infringement, privacy breaches and misinformation.
  • Revisit your policy often to incorporate new regulations and norms. With the pace of change, we recommend reviewing your policy at least annually.

Holly K. Hall, APR, J.D., is a professor of strategic communication at Arkansas State University and a member of the PRSA Board of Ethics and Professional Standards.

Meredith Libbey, APR, Fellow PRSA, MBA, is a member of the PRSA Board of Ethics and Professional Standards and the Accreditation Marketing Committee and is past president of the PRSA Nashville Chapter.

[Illustration credit: day of victory stu]

