What Communicators Still Don’t Understand About AI

Artificial intelligence (AI) will greatly affect the communications business, whether we understand it or not. Just last week, China's Xinhua News Agency debuted the world's first AI news anchor, a marvel constructed from the facial features of real anchor Zhang Zhao, applied to a body template and animated. Xinhua says that the anchor "learns from live broadcasting videos by himself and can read texts as naturally as a professional news anchor."

AI clearly has big plans for communicators, but how do communicators plan to use AI? Mark Weiner, chairman of the Institute for Public Relations’ Measurement Commission and chief insights officer at Cision, says much of what the PR profession thinks it knows about AI is inaccurate. "There’s also confusion in the marketplace about what automation can and cannot do, false claims about the presence of AI, a purposely-blurry distinction between proving PR’s value and generating PR ROI," he told PR News.

While AI is an umbrella term covering many different applications, the most common use of this tech in our profession involves machine learning. Machine learning is already being put into practice when chatbots are deployed as social listening tools, and in the email services we use day-to-day, which offer auto-generated quick sentences as we draft replies.
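To make "machine learning" concrete, the sketch below shows the basic idea behind a social listening tool: a model learns to score sentiment from a handful of hand-labeled posts instead of following hand-written rules. The sample posts, labels and choice of library (scikit-learn) are purely illustrative and are not drawn from any vendor's product.

```python
# A minimal sketch of machine learning for social listening:
# the model learns patterns from labeled examples, then scores new mentions.
# All posts and labels here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set: 1 = positive mention, 0 = negative mention
posts = [
    "Loved the new product launch, great event",
    "The press release was clear and helpful",
    "Terrible customer service, very disappointed",
    "The announcement was confusing and late",
]
labels = [1, 1, 0, 0]

# Convert raw text into word-frequency features, then fit a simple classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score an unseen mention; the model generalizes from the examples above
new_mention = ["Disappointed by how the launch was handled"]
print(model.predict(new_mention))  # predicted sentiment label for the mention
```

In practice the training sets run to thousands of examples and the models are far more sophisticated, but the principle is the same: the system learns from data rather than from explicit instructions.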

A New York Times Magazine piece on the email auto-reply traces this tech back to Clippy, the animated paperclip that popped out of Microsoft Office to offer automated assistance. The piece also warns that automation should not automatically be associated with efficiency. "Constant sweeping changes in office communication—from speaking and writing to phones and printing to emailing and instant messaging—do not tell a tidy tale of increased efficiency or decreased workload, even as they represent progress," writes the Times Magazine's John Herrman.

"Self-automation can free us only to the extent that it actually belongs to us," he concludes. "We can be sure of only one thing that will result from automating email: It will create more of it." To this end, communicators must actually figure out: What part of this emerging tech really does belong to us? And insofar as AI remains an umbrella term, how is it helpful for our business?

"Using AI for the sake of looking impressive might wow people in the short term," Dave Gershgorn, lead AI reporter at Quartz, told PR News, "but only companies that have clear objectives and use cases for AI are the ones making full use of the technology."

In conversation with MIT Technology Review, Andrew Moore, Google's Cloud AI boss, echoed that sentiment. Likening the integration of AI into businesses to electrification, Moore notes that it took electricity two to three decades to change the way the world operated. "Sometimes I meet very senior people with big responsibilities who have been led to believe that artificial intelligence is some kind of 'magic dust' that you sprinkle on an organization and it just gets smarter," he said. "In fact, implementing artificial intelligence successfully is a slog."

Moore goes on to say that once you start breaking business problems down into the traditional components of AI, namely perception, decision making and action, Google Cloud can map those onto different parts of the business.
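As a rough illustration of that decomposition, a routine PR monitoring task might be split into those three components as in the sketch below. This is not Google Cloud's tooling; every function, word list and threshold is hypothetical and exists only to show the structure.

```python
# An illustrative perception / decision / action breakdown of a PR task.
# All functions, word lists and thresholds here are hypothetical.

def perceive(mention: str) -> float:
    """Perception: estimate how negative an incoming mention looks (0 to 1)."""
    negative_words = {"outage", "recall", "lawsuit", "boycott"}
    words = set(mention.lower().split())
    return len(words & negative_words) / max(len(words), 1)

def decide(severity: float) -> str:
    """Decision making: pick a response tier from the perceived severity."""
    return "escalate" if severity > 0.2 else "log"

def act(decision: str, mention: str) -> None:
    """Action: carry out the chosen response."""
    if decision == "escalate":
        print(f"Alert the crisis team: {mention}")
    else:
        print(f"Logged for the weekly report: {mention}")

mention = "Customers report an outage after the product recall"
act(decide(perceive(mention)), mention)
```

Mapping each of those pieces onto a real workflow, and deciding which of them deserve real machine learning, is where the "slog" Moore describes comes in.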

Moore also says that customers often bring him a massive load of data, assuming there's some value just waiting to be mined from it. "What you really need to be doing is working with a problem your customers have or your workers have," Moore says. "Just write down the solution you’d like to have; then work backwards and figure out what kind of automation might support this goal; then work back to whether there’s the data you need, and how you collect it."

The point Moore keeps returning to is working backwards: identify the problem you want to solve first, and only worry about the data you need to solve it afterward. He also stresses that, though AI is about using math to make good decisions, it currently has nothing to do with simulating actual human intelligence.

"Once you understand that, it kind of gives you permission to think about how a set of data tools—things like deep learning and auto machine learning and, say, natural language translation—how you can put those into situations where you can solve problems," he says. "Rather than just saying 'Wouldn’t it be good if the computer replaced the brains of all my employees so that they could run my company automatically?'"

To break it down further, MIT Technology Review whipped up a handy little flowchart to help make sense of AI's constantly evolving definition.

Follow Justin: @Joffaloff