
[Editor’s Note: Even experienced PR professionals need a refresher on the basics periodically, as well as insight about newer concepts. Whether it’s how to become a better writer or a review of PR ethics, we aim to provide you with content about a variety of topics and issues. Hence, our Explainer series.]
Previous posts looked at Barcelona Principles, the Metaverse, employee resource groups (ERGs), NIL, social conversation platforms, bounce rate, off the record and sonic branding (among others). Today we review the basics of large language models, sometimes called language learning models, and how they power artificial intelligence tools.
If there are topics you’d like to see discussed in this series, please let us know.]
What Is a Large Language Model (LLM)?
To be a responsible AI user today, it's important to know the basics of what these tools are built on. If you use ChatGPT, Claude, Google Gemini, Microsoft Copilot or a similar tool, you've already encountered a large language model, or LLM (sometimes called a language learning model).
To put it simply, an LLM is a machine learning model that can understand, interpret and generate human language. It does this by ingesting enormous amounts of data (articles, research, chats, pretty much anything ever published or uploaded to the model) and learning the patterns in that text, which lets it generate responsive, human-sounding output for the user.
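For a rough intuition of what "learning the patterns in text" means, here is a toy sketch in Python. Real LLMs use neural networks with billions of parameters trained on web-scale data, but at the simplest level they are trained to predict the next word given the words so far. This hypothetical bigram example (every name in it is made up for illustration) captures that idea on a tiny scale:

```python
import random
from collections import defaultdict

# Toy "training data" standing in for the web-scale text a real LLM ingests.
corpus = (
    "public relations builds trust . "
    "public relations shapes reputation . "
    "good writing builds trust ."
).split()

# Count which word tends to follow each word (a simple bigram model).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=6):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("public"))  # e.g. "public relations builds trust ."
```

The output only ever recombines patterns seen in the training text, which is also why the quality (and biases) of that text matter so much, as discussed below.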
According to IBM, these generative capabilities can include (see the sketch after this list for two of them in action):
- Text generation
- Content summarization
- Sentiment analysis
- Language translation
- and more...
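To make the list concrete, here is a minimal sketch of requesting two of those tasks (content summarization and sentiment analysis) through the OpenAI Python SDK. The model name and prompt wording are illustrative assumptions; any comparable LLM service works the same way, in that you send text in and the model sends generated text back:

```python
# Minimal sketch using the OpenAI Python SDK (pip install openai).
# The model name below is an assumption; swap in whichever model you use.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

press_release = "Acme Corp. today announced record quarterly earnings..."

# Content summarization: ask the model to condense the text.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Summarize in one sentence: {press_release}"}],
)
print(summary.choices[0].message.content)

# Sentiment analysis: ask the model to classify the tone.
sentiment = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Is the tone positive, negative or neutral? {press_release}"}],
)
print(sentiment.choices[0].message.content)
```

Note that both tasks use the exact same call; only the prompt changes, which is why a single LLM can handle such a wide range of communications work.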
Why It Matters to Communicators
Generative AI can do amazing things for communicators' productivity and work efficiency. However, it's not without its flaws.
The biggest problems with LLMs and generative AI are factual accuracy and bias. A recent UNESCO study found that LLMs tend to "produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men...and were frequently associated with words like 'home', 'family' and 'children', while male names were linked to 'business', 'executive', 'salary', and 'career'." This bias comes from the training data itself: millions of human-authored documents fed into these systems, which can sway a model's associations one way or another.
So what does this mean for those using these tools? Always fact check. Always edit, reread and check for bias. Have others on staff review generated output to ensure it is inclusive and accurate. Make sure your brand or organization does not end up on the wrong end of an AI-generated social post that didn't get a second look.
Monitoring and maintaining your organization's digital reputation is also a concern, according to Brian Snyder, Global President of Digital at Axicom. And for this reason, PR pros should be hypervigilant about all communications. “Start looking with a critical eye at every piece of content that your communications and marketing functions are putting out,” he recommends. “Ask yourself, what questions might this piece of content be used by an AI answer engine to answer? And then optimize your content to serve as a resource for AI answer engines to use in positioning your brand the way that you want them to position it.”
More resources on AI, prompts and large language models:
- PRSA Releases Guidelines for AI Use
- AI Alchemy: Crafting Brand Legacies in the Digital Era
- Customizing AI GPTs to Optimize PR Business Processes
- How Can OODA Impact Public Relations and AI Use?
- How to Avoid Google Penalizing Your AI-Generated Content
- How to Use ChatGPT to Refine—Not Define—Your Media Pitches
Nicole Schuman is Managing Editor at PRNEWS.