As the metaverse expands, companies will have a massive opportunity to deploy incredibly lifelike virtual influencers (VIs). For example, as VIs proliferate, organizations will establish mascots and ambassadors that 'live' in the metaverse. As you can imagine, these company-'birthed' VIs should allow brands to achieve a more precise level of messaging and marketing.
Companies will have more choices in the influencer space and, hopefully, increased control. No longer will brands be forced to compete for a human influencer’s time or attention. Likewise, marketers won't lie awake at night hoping human influencers convey talking points and CTAs accurately.
Presumably, in the near term, companies with larger budgets will enjoy the advantages VIs offer. Employing teams of AI, CGI and creative personnel, these companies will 'birth' and 'raise' VIs in their image. In theory, VI 'children' will say and do as company marketers command.
Third-party creators likely will build and control the VIs smaller companies deploy.
More safety but...
Still, it’s assumed all companies, large and small, that deploy VIs will enjoy increased safety and reliability. Unlike human influencers, it's difficult to imagine a VI embroiled in a legal issue, failing to appear at an event or doing or saying something embarrassing.
On the other hand, strategy and intentionality are integral in this equation. As in cinema, where nearly everything actors do and say on screen is scripted, VIs should work from careful plans that dictate every action and piece of content that emerges from them.
An ideal situation has PR pros in the writers’ room as VIs’ actions and words are finalized. At the very least, PR should review VI scripts in advance of their going, well, ‘live.’
One set of rules
In terms of how brands work with VIs, the same rules as dealing with human influencers apply: VI 'contracts' should include a morality clause, built-in basic protections and brand-safety checks.
In addition, a successful VI effort will include managed competitive scope, properly disclosed sponsorships and an orderly, intentional content-creation process.
Although, as noted above, VIs should offer a higher degree of safety than human influencers, the potential for gaffes and impromptu crises exists. This risk will grow as real-time interaction becomes more common in the metaverse.
Errors will most likely creep in where there are too many variables and limited real-time control of VIs. To avoid such gaffes, VI managers, ideally with PR input, should ensure VIs’ content feels and looks organic while still communicating brand narratives clearly.
Moreover, the brand’s CTA should address KPIs and goals while minimizing distractions in VI content. Such distractions include other logos or items that may shift focus to something outside the company message.
Another consideration here is brands’ acceleration in adopting AI. Initially, corporate teams will manage VIs. However, as time passes, companies will want VIs that seem more convincing and exist in multiple places simultaneously.
Similarly, audiences will demand that VIs seem ‘real,’ making them worthy of attention and interaction. This demand will push AI forward and accelerate the development of virtual-interaction capabilities.
Eventually, the metaverse could essentially become one big Turing test, constantly requiring that VIs ‘think.’
A final consideration in the growing metaverse is transparency. Should government regulation dictate that VIs disclose that they are not real?
It’s a tricky question and requires consideration. Then again, does it matter?
First, what is the difference between a paid actor in a show and an animated character in a cartoon? Really just the format.
In addition, how do we define real? Is a team of people behind a VI real? How about an individual who manages the VI? Is she real, even though she is operating with an alias?
Similarly, there are degrees of what real means. For example, should you run into Tony the Tiger in the metaverse, it is safe to assume you know he is not real.
Perhaps true disclosure should come down to identifying the individual or organization behind a VI account, much the same way the App Store discloses the developer of each app.
Eric Dahan is co-founder and CEO, Open Influence