Critical Mass

Contact Information

1011 9th Avenue SE Suite 300
Calgary Alberta T2G 0H7
Canada
Phone: 403 262 3006
Email:

Dianne Wilkins
Chair

Chris Gokiert
Chief Executive Officer

Grant Owens
Chief Strategy Officer

Lee Tamkee
Chief Financial Officer

John McLaughlin
Chief Operating Officer

Diane Heun
Executive Vice President, Business Development

Basic Info

Core Competencies: Advertising/full service/integration, Mobile Marketing, Social Media Marketing, E-Commerce, Search Engine Optimization, Web Design, Marketing Services, Experiential, Branded Content/Entertainment, Market Research/Consultancy, Marketing Technologies/Analytics, Media Buying and Planning, Design, Visual/Sound Identity, Branding/Celebrity Endorsement, Strategic Planning

Founded: 1996

Employees: 1500

Awards: 27

Creative Work: 45

Clients: 16



In a world of deep fakes and Google Duplex, how far should brands take AI?

Have you noticed that AI has finally passed into everyday reality? No one needs to reference science fiction apologetically to talk about it. It’s plainly here. It’s plainly real. And now that it’s real, our questions about it need to get real in response.

No, I don’t mean, “Will the robots rise up and take over?” Rather, now that AI is growing up and gaining power, our questions have to grow up with it.

The effect on brand experiences
Here’s one question to start: Do we, as an industry, have an obligation to think about how using AI will impact not only the brands we work for, but our brands’ audiences and the talent we sometimes use to engage them? For example, when Google debuted its Duplex platform and made an actual phone call with it, the voice (complete with “ums” and “mmm-hmms”) sounded utterly human. Since then, Google has begun to talk about how, when placing calls, Duplex will have to disclose that it is not, in fact, human. That’s important. Not to do so would be ethically questionable (and in some US states, likely illegal).

As brand marketers follow suit and present equally lifelike experiences for consumers, what obligations will they have to remove the human mask and reveal the machine at work? It’s a question we need hard, honest answers to. Because while academics, scientists, and PhDs are ushering in advances in AI itself, we’re the ones putting it in people’s hands, devices, homes, and newsfeeds.

When AI crosses the line, and the political aisle
Case in point: In a recent YouTube video, former US President Barack Obama called current US President Donald Trump an “unqualified dipshit.” How about that? You may agree, you may disagree. But we can all agree on one thing: The video was fake. We know that because the people who made it told us it was. But our eyes and ears couldn’t quite tell the difference.

The video was produced using an AI-driven technology called “deep fake,” which allows amateurs to use open-source software to create convincingly real audio and video with very little time and effort. The “President Obama” video was actually voiced by actor Jordan Peele, as revealed at the end of the clip.
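
For readers who want a feel for the mechanics, the best-known open-source face-swap tools were built around a simple idea: one shared encoder learns pose and expression, while a separate decoder is trained for each identity, so "swapping" means decoding person A's frames with person B's decoder. Below is a minimal, illustrative sketch of that architecture in PyTorch. The layer sizes, resolution, and training loop are my own assumptions for clarity, not details taken from the article or from any specific tool.

```python
# Minimal sketch (illustrative assumptions only) of the shared-encoder /
# two-decoder autoencoder idea behind early open-source face-swap tools.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent code (pose/expression)
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct faces of person A
decoder_b = Decoder()  # trained to reconstruct faces of person B

# Training alternates between the two identities with an ordinary reconstruction
# loss; because the encoder is shared, it learns pose and expression, while each
# decoder learns one person's appearance.
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)

faces_a = torch.rand(8, 3, 64, 64)  # stand-in batches; real tools use aligned face crops
faces_b = torch.rand(8, 3, 64, 64)
for _ in range(1):  # one illustrative training step
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A's frame, decode it with person B's decoder,
# yielding B's face with A's pose and expression.
swapped = decoder_b(encoder(faces_a))
```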

A new kind of identity theft
Deep fakes first came to prominence on Reddit. There, users were doing something more pernicious than political mudslinging; they were superimposing actors’ faces onto pornographic video clips. With much fanfare, Reddit banned the posting of such videos because of the deep and self-evident ethical boundaries they violate. Even pornographic sites followed suit and banned deep fakes, drawing a hard line in the sand in an industry that most people wouldn’t quickly associate with ethical or moral standards.

Using a different, but related, technology for “Rogue One: A Star Wars Story,” Disney resurrected the characters of Grand Moff Tarkin and a young Princess Leia. The latter (Carrie Fisher) was able to consent to the resurrection; the former (Peter Cushing) had been deceased for many years.

So far, here’s the running tally on AI-driven artifice: 1) a multi-industry rejection of deep fakes as soon as they became pervasive, and 2) complex, theatrical manipulations that, so far, are not quite perfect and have been the sole preserve of powerful entertainment companies with nearly limitless financial and technical resources.

So what happens when the technology further improves (which it will) and becomes accessible to marketers and brands (which it always does)? Imagine a casting call where a dozen actors are digitally and convincingly superimposed on a stand-in model before any of them are engaged in real life. Or imagine trying out ad copy with perfectly synthesized voiceover from actors whose voices have been digitally reproduced and who don’t even know they’re saying what they seem to be saying. Imagine promoting a product you’d never use or a cause you abhor.

Without consent, are these practices ethical?

(Note: I’ve only been talking about one small category of AI and ethics. There are other, bigger topics I can’t cover properly in the space of a short article, such as AI’s ability to propagate insidious societal biases.)

Such questions and debates are urgent. There are now companies that claim to use AI to influence how we think, exploiting the human mind’s weakness for instant gratification. Armed with large data sets, these companies aim to exploit how we mediate motivation and desire. While some claim to be acting on the side of good, selling their wares strictly as fitness and education tools, the question remains: Should we use AI to optimize and exploit physiological responses in order to change a consumer’s behavior?

Because that power will increasingly come into our grasp. We’ve already seen fallout from this in rudimentary AI, like fake news bots. We’ll soon be able to multiply their impact by many orders of magnitude. As companies acquire more data (abiding by platform terms of service or not), our ethical purpose in using that technology is far less certain. As Reddit and even pornographic sites have shown, bad ethical behavior can be combated.

As an industry, are we really okay exploiting people — their likenesses, their digital environments, their mental sovereignty — in order to squeeze out every last ounce of profit, regardless of the ethics? Perhaps these questions and conversations, though made urgent by AI, have been with us for longer than we’d like to think.

Ricky Bacon is Group Technology Director for digital experience design agency Critical Mass in NYC. 

Read more here: https://venturebeat.com/2018/06/09/in-a-world-of-deep-fakes-and-google-duplex-how-far-should-brands-take-ai/