S&T Live Recap: Companies Need Clear AI Policies and ‘Ongoing Conversations’

Many organizations say they have an AI policy, “but then put it on a shelf,” Melissa Flynn, APR, said. To keep up with changes in the technology, a company’s AI policy should “be a living, breathing thing.”

A sound AI policy sets forth clear guidelines for how a business uses AI tools, said Flynn, principal at Melissa Flynn PR & Marketing in Dallas. Companies should train their people on how to use AI and “what’s right and what’s wrong by your organization’s standards,” she said.

Flynn and colleague Pete Pachal were guests on the Sept. 25 episode of Strategies & Tactics Live, PRSA's monthly LinkedIn livestream in which contributing writers further explore topics they've covered in the Strategies & Tactics publication. Pachal and Flynn wrote an article about company AI policies for the September issue.

Artificial intelligence can inadvertently ignite controversy for a company, “so part of the reason for having an AI policy is to know what to do when a crisis hits,” Flynn said. “Because AI is still very much unknown.”

An organization’s AI policy is “really a conversation that’s ongoing with your people,” said Pachal, who writes a column on AI for Fast Company and created The Media Copilot, an AI-training course for public relations and media professionals.

An AI policy should spell out to employees why the company uses AI the way it does. But that ongoing conversation also means "answering their probably-very-thoughtful questions honestly," Pachal said.

‘AI Hallucinations’ and ‘Shadow AI’

“Dealing with hallucinations” is one of the first things a company AI policy should address, Pachal said, since the technology sometimes presents fiction as fact. Human oversight and “knowing where the buck stops should be a fundamental part of a company’s AI policy,” he said.

Another complication arises when employees must use an AI tool they don't consider best-in-class, "without an appreciation for why the company policy exists" to require that particular tool, a reason that might be avoiding liability, Pachal said.

As a result, "You're going to get what's called 'Shadow AI.'" People will pull out their phones or personal devices and "start using proprietary company data on that service without the authorization or the knowledge" of the company. "'Shadow AI' is a problem among all industries, including PR," he said. "It has to be talked out with employees."

John Elsasser, editor-in-chief of Strategies & Tactics and host of S&T Live, asked his guests about the gap that might exist between what’s written in company AI policies and what actually happens day-to-day, as the technology quickly evolves.

To close that gap, extend the conversation to “all the people you do business with,” Pachal said.

As examples of what can go wrong, he cited cases at Sports Illustrated and the Chicago Sun-Times, in which vendor-supplied content generated by AI slipped into those publications, creating reputational crises for their newsrooms.

In meetings, “AI note takers” might capture a client’s confidential information and then release that information publicly in subsequent responses to people’s prompts. For PR agencies, “We fully expect there will be requests in the future” from clients on how the agency uses artificial intelligence, Flynn said, “to ensure that the material it creates is safe.”

Watch the full episode here.


[Illustration credit: arnab dey]

About the author

PRSA Staff