Thought Leadership

Helping PR Pros Speak the Same Language on AI

On the second Monday of every month, PRSA is offering AI Pulse, a briefing hosted by Ray Day, APR, PRSA’s 2026 immediate past chair, that provides timely insights into the latest AI trends, tools and developments, and into how to stay ahead of an ever-evolving digital landscape.


“AI, communications and PRSA go hand in hand,” Ray Day, APR, said on Jan. 12 during the inaugural “AI Pulse,” the first installment of a yearlong series.

To be held on the second Monday of every month, the briefing on AI in communications will be “a more intimate session” than the Member Mondays livestream events that Day hosted as PRSA’s 2025 chair, he said. “We’re going to go even deeper.”

In his conversations with PRSA members and other communications colleagues, Day — Stagwell vice chair and Allison Worldwide executive chair — said he’s discovered “a disparity in how we’re talking about AI, and a little bit of fear. People are holding back because they don’t know the correct terms to use or don’t want to seem uneducated. We want to make sure that we’re all speaking the same language, with the same AI vocabulary.”

Panelist Stephanie Parrott is a marketing analytics and insights manager at Google DeepMind, the tech giant’s AI research lab.

“The biggest challenge right now isn’t only the AI technology; it’s understanding what people are talking about when they talk about AI,” she said. “Hundreds of terms are floating around, with new terms coming about every day. It’s hard to keep up, or to know what actually matters.”

Parrott and co-panelist Amanda Carl-Pratt, director and head of communications at Google DeepMind, listed their top 10 AI terms that PR professionals should know in 2026:

  • Large language model — A type of AI trained on vast amounts of data that can recognize, summarize and generate text, and predict the next word that would likely follow in a sentence.
  • “Generative AI” versus “Predictive AI” — Predictive AI makes forecasts by analyzing existing data. Generative AI, on the other hand, creates new content such as text, images or code.
  • Prompt — The specific text instructions or query that a user feeds an AI tool. The quality of the output depends heavily on the specificity, context, clarity and nuance of the prompt.
  • Slop — Low-quality digital content mass-produced by AI, with minimal human oversight or creative effort.
  • Hallucination — When an AI sounds confident and plausible, but generates factually incorrect information.
  • Multimodal — An AI system that can process and generate multiple types of media simultaneously, such as text, images, video and audio.
  • “Inference” versus “training” — During the “training” phase, an AI model learns. “Inference,” on the other hand, is the “job” phase, when a consumer uses the model to answer questions.
  • Alignment and safety — In the context of AI, these terms refer to tuning an AI model so that it acts in accordance with human values and refuses to fulfill harmful requests.
  • Context, or “context window” — The AI’s working memory: the amount of information the model can process and analyze in a single interaction (see the token-counting sketch after this list).
  • Thinking, or chain of thought — A newer capability in which an AI model pauses to break down complex problems logically before responding.
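
To make the “context window” idea concrete, here is a minimal sketch in Python, assuming the open-source tiktoken tokenizer; the window size and the sample text are illustrative, not from the panel.

```python
# A minimal sketch of measuring text against a model's context window.
# Assumes the open-source tiktoken tokenizer (pip install tiktoken);
# the 128,000-token window is illustrative, not any specific model's limit.
import tiktoken

CONTEXT_WINDOW = 128_000  # illustrative window size, in tokens


def fits_in_window(text: str) -> bool:
    """Report how much of the assumed context window the text consumes."""
    encoding = tiktoken.get_encoding("cl100k_base")  # a common tokenizer
    token_count = len(encoding.encode(text))
    print(f"{token_count:,} of {CONTEXT_WINDOW:,} tokens used")
    return token_count <= CONTEXT_WINDOW


fits_in_window("FOR IMMEDIATE RELEASE: Acme Corp. today announced ...")
```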

“The prompt is your primary tool for managing user disappointment,” Parrott said. “From a comms perspective, we think this requires a pivot towards education. It takes time to master the prompts. You can’t just launch a tool; we have to teach people how to speak to it with a very well-crafted prompt that leads to value-added output.” Otherwise, she said, “It’s garbage in, and garbage out.”
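
As one way to picture that difference (an illustration, not the panelists’ own example), the sketch below contrasts a lazy prompt with a well-crafted one, assuming the OpenAI Python SDK; the model name, company and prompts are placeholders.

```python
# A minimal sketch contrasting a lazy prompt with a well-crafted one.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

lazy_prompt = "Write a press release."

crafted_prompt = (
    "You are a corporate communications writer. Draft a 300-word press "
    "release announcing Acme Corp.'s Q3 earnings for a trade-press audience. "
    "Use an objective tone, include one quote from the CEO, and close with "
    "a standard boilerplate paragraph."
)

# The same model, given more specificity and context, returns far more
# usable output: "garbage in, garbage out" in miniature.
for prompt in (lazy_prompt, crafted_prompt):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content[:200], "\n---")
```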

Panelist Adam Pratt, director of issues and government communications at IBM, said, “Slop is correlated to the prompt, in that if you have a poor prompt, or a lazy prompt or a non-specific prompt or a prompt that lacks sufficient context, slop is the garbage that you get out of an AI system. Learning how to prompt well is the key to avoiding slop.”

For communications leaders, AI hallucinations present a reputational risk, Carl-Pratt said. “You really have to manage expectations, and know that these tools are creative partners, but they’re not perfect. I like to say that [AI] is like an eager intern: It aims to please, but it doesn’t always get it right.”

Carl-Pratt advised never “to promise 100% accuracy directly out of the generative AI. You have to make sure that you review it, that you find those hallucinations, [which] you as a practitioner will find better than anyone else.”


Illustration credit: olecnx

About the author

PRSA Staff
