  • 00:00

    [MUSIC PLAYING] [AI & Marketing-- Understanding Humans Better] [APPLAUSE]

  • 00:15

    STUART SHERMAN: So I have an atypical presentation, so I welcome you to get out your camera phones and plagiarize. I normally talk about all sorts of structural things and very specific things. [Stuart Sherman, CEO, IMC] But understanding that you're academics and you're probably going to end up talking with students,

  • 00:37

    STUART SHERMAN [continued]: I thought I would bring you some of my favorite slides to talk to students about and share them with you. Some of them involve audience participation, so I hope that you won't feel shy about it. Just to challenge London, I did a talk in Panama City, Panama, a few weeks ago to 1,000 bankers.

  • 01:02

    STUART SHERMAN [continued]: It was the SWIFT Conference. And every single person participated. It was so loud, I was shocked. So it's just a challenge. Oops. Something means forward. There we go. [imc SINCE 2004. TORONTO, LONDON, LEEDS, CHANGSHA. 80+ PEOPLE] So just a little bit of background. I think John described me a little bit,

  • 01:24

    STUART SHERMAN [continued]: but I run a company called IMC. We've got offices in Toronto. We had offices in London, and we've now moved the head office to Leeds. And we have a large office in Changsha, China. So I actually know quite a bit about China. And we're over 80 people. We're actually probably closer to 100. It's an old slide, but we recycle.

  • 01:46

    STUART SHERMAN [continued]: [SI, SCALED INSIGHTS, equals BAI] I'm going to talk to you a little bit about one of our spin-outs, because one of the things we do is spin out companies; this one is called Scaled Insights. And behavioral AI is a key area of interest of ours, and mine personally, having been both a marketer and having an industrial psychology background. And it's really about using AI to understand humans.

  • 02:09

    STUART SHERMAN [continued]: [What is AI? A passive tool that helps us see?] So it always starts because people say, OK, what is AI? And then we start talking about, OK, so is machine learning AI? What is this? What is that? Ultimately, I wonder about this. Is it really just a passive tool that helps us see things better? Does it help us understand things that are in the data? Or is it something that gives us new insights into things

  • 02:31

    STUART SHERMAN [continued]: that we can already see? And I have this picture here because the proof is that in looking for lung cancer, AI is now up to 93% effective, when a typical radiologist is in the mid-50s. So it's amazing where this is going, and it's quite shocking.

  • 02:55

    STUART SHERMAN [continued]: And we're working with the NHS on quite a few things around that. So I thought about, how do we start this as a conversation? And how would you talk about it with others as well, either at dinner or in a lecture? So I like to start with evolutionary psychology and how we think. Now, I stuck in two extra slides that don't match this slide

  • 03:19

    STUART SHERMAN [continued]: show because I'm well-known for these slides, and I think they're important. And this was literally a last-minute add. But I like to describe this because we as humans tend to categorize things. And a lot of what I'm going to talk to you about today is about categorization.

  • 03:40

    STUART SHERMAN [continued]: So I want to give you an example of how we categorize things and do a really bad job at it. I often say I'm a deeply flawed human being, but that's because everybody is. So here are two people. They're demographically identical. They were both born in 1948, in fact three weeks apart. They both have homes in London and spend quite a bit of time

  • 04:02

    STUART SHERMAN [continued]: there.They're both top one-percenters.They're also both world famous, or I wouldn'tbe using them as an example.They're both male.They've both been married twice.They're both Anglican, and they both love dogs.So geodemographically, we agree they're the same--

  • 04:22

    STUART SHERMAN [continued]: you'd put them in the same bucket if we were creating persona maps or something like that. We agree they're the same? So one of them is Prince Charles, Prince of Wales, and the other one is Ozzy Osbourne, the prince of darkness. [LAUGHTER] So when we start thinking about them, their-- Prince Charles's idea of fun, we've got a pretty good idea.

  • 04:44

    STUART SHERMAN [continued]: Ozzy Osbourne, you don't have to be creative. So what I think the opportunity is with technology in the future is to start slicing this apart. So I'll finish with how we get there. But in the beginning and in the middle, I'll take you through a little bit of a journey, because when I talk about us being deeply flawed,

  • 05:04

    STUART SHERMAN [continued]: I'm going to play a magic trick with you. So I want everybody here to pick one of these cards. Pick a card, any card. And I want you to know that I'm doing magic in PowerPoint. Now, that's a skill. Ready? Got your card? Your card is gone. Am I right?

  • 05:24

    STUART SHERMAN [continued]: OK. So how did I do it? Well actually, they're two different decks. So what happened here? Well, what happened is I got you-- oh, by the way, if you have a Jack of hearts, I just blew your mind. But we got you to focus. And what happens is when humans focus, we focus on one thing.

  • 05:45

    STUART SHERMAN [continued]: And we generally-- that focus causes us to ignore the things around it. And the beauty of AI is that it doesn't suffer from that. So it doesn't really focus. It focuses on what we tell it to, which is actually quite different. And so we can teach AI to do things that humans just can't do.

  • 06:06

    STUART SHERMAN [continued]: And the reason is that it doesn't have that focus. So let's take that and go into stereotypes. And I love it. I suggest that brands are manufactured stereotypes. And if we went back 250,000 years ago, tiger, big teeth.

  • 06:27

    STUART SHERMAN [continued]: Shark, big teeth. Sharks have their own theme music, right? We're afraid of them. So if I'm George Lucas and I'm trying to create something for Star Wars 17 on the planet Remulak in the far-off galaxy of whatever it's called, I create something, and I just give it big teeth.

  • 06:49

    STUART SHERMAN [continued]: And the expectation is there. And so if you think about it in branding, we're actually activating the same aspects of the brain. We're doing the exact same stuff. So when we think about this, the brain itself is really high maintenance. And some of you will be very excited to hear this, that 20% to 30% of the calories that you're going to burn today

  • 07:13

    STUART SHERMAN [continued]: are going to be burned by your brain. Isn't that awesome? And that's every day. So think more. And actually, you've experienced this. [LAUGHTER] You've experienced this because you've done things that are high cognitive load, and afterwards you're exhausted, and you probably

  • 07:33

    STUART SHERMAN [continued]: crave some junk food. True? So this is natural. We evolved this way. And so when you think about it for human choice behavior, it is exhausting. So if you've ever been to a shopping mall as a North American-- I'm a Canadian, by the way, not an American. Very careful to say that.

  • 07:56

    STUART SHERMAN [continued]: You go to a shopping mall or the high street, you spend a bunch of time there, and you're exhausted. So when we think about this in terms of how people make decisions, Amazon might be less exhausting if you can find what you want quickly. And AI being able to enable you to do that and understand things faster is something that we want to do.

  • 08:17

    STUART SHERMAN [continued]: Now, we all have these devices, which are the computer that you heard about in the '50s that was going to control your home. I can adjust the lights in my house right from here and change the music in my house from here. And I can talk to it, and it will do it. So if that sounds like a 1950s sci-fi show or whatever,

  • 08:38

    STUART SHERMAN [continued]: it's here. It's just not quite exactly how Ray Bradbury imagined it. But we have it. And we've offloaded huge amounts of our brains to this. So as we've evolved, we've gone for this whole concept of offloading capabilities and creating shortcuts, heuristics.

  • 08:59

    STUART SHERMAN [continued]: And the reason we do it is that, because of this 20% to 30% calorie burn, before there was Sainsbury's, you actually had to go and hunt and gather. And therefore, if you were built for efficiency, you were much more likely to survive than if you were wasteful with your mental powers, which burn so many calories.

  • 09:20

    STUART SHERMAN [continued]: So as you start to think about it in this context, let's play a little game. And this is audience participation time number one. Or actually, number two. What I want you to do-- I'm going to show you words. So this is the word "orange," but it's written in blue. So I don't want you to read the word. I want you to say "blue."

  • 09:41

    STUART SHERMAN [continued]: Everybody.

  • 09:42

    AUDIENCE: Blue.

  • 09:43

    STUART SHERMAN: Excellent. Let's go.

  • 09:45

    AUDIENCE: Orange. [CHATTER] [Red Black Orange Green Red Black Orange Green Blue Green Orange Blue Green Blue Red Black Orange Blue Green Red]

  • 09:54

    STUART SHERMAN: It's hard. Eh? So what's happening? So what's happening is that there's actually two parts of your brain that are processing this. One part of your brain recognizes color, and the other part of your brain reads. And they're two different parts of your brain. So what you have is what's called a neural parliament,

  • 10:17

    STUART SHERMAN [continued]: if we were talking with neuroscientists. And the neural parliament has a little bit of a fight, hopefully decides to stay part of the EU, and produces the winner. And the winner is the color orange for the first word there. Now, if I was going to teach an AI to do this,

  • 10:38

    STUART SHERMAN [continued]: I would just have it recognize color, and I wouldn't have an AI that could read. Can you imagine how fast that would be? So efficiency and the way AI is changing our worlds is that it actually has a very narrow slice of our capabilities instead of a broad slice. So all of these people who say, oh, I'm worried about Skynet and AI is going

  • 10:59

    STUART SHERMAN [continued]: to take over, well, right now, we're in the small slice business with AI. And so everything that we're analyzing, we're trying to find patterns in small slices. So let's talk about this in terms of categorization, because this is really most of what we do as humans. We're pattern recognition machines. And it happens to be what we do as--

  • 11:20

    STUART SHERMAN [continued]: what we do with AI. So when you meet somebody new, you generally categorize them into one of the following four-- like, dislike, indifference, or mate. And we generally make that decision extraordinarily fast.

  • 11:41

    STUART SHERMAN [continued]: I'm sure you've experienced it. Hopefully, you haven't fallen into indifferent or dislike. The other two are pretty good. But you've got a 50-50 chance if you look at the odds. So if we categorize people into buckets, let's talk about buckets of dogs. I like dogs. They're more fun. They're much cuter than people. So I have a theory, and we're proving it out with AI

  • 12:06

    STUART SHERMAN [continued]: that we've built-- and it is very important to say this is not eugenics. Very, very clear. Not eugenics. Herding dogs. So if I was going to say there's a herding dog, like let's say an Old English sheepdog-- we're in England-- you could also

  • 12:26

    STUART SHERMAN [continued]: say there's a German shepherd, which also has herding behavior. And there's a border collie. And there's an Australian shepherd. And I think that's a Bouvier. Oh no, that's a Rottweiler, which is another herding dog. So these dogs are genetically different, but mentally the same. So they all have the same behavior.

  • 12:46

    STUART SHERMAN [continued]: And it doesn't matter. You can have a litter of Old English sheepdogs. They are going to want to organize things. They'll chase your children around. They'll herd sheep. They'll do whatever-- whatever it is, they will show off herding behaviors. Just like terriers are very independent dogs. They want to dig in your back garden. They'll do all sorts of stuff like that.

  • 13:08

    STUART SHERMAN [continued]: And retrievers, we know how they behave as well. So my theory is that humans fall into these categories as well, but they distribute normally across everything that we would call races and across everything that we would call cultures. But some people are retrievers, and some people are herding dogs.

  • 13:29

    STUART SHERMAN [continued]: And so when we look at this, we as humans suck at this. We're not very good at differentiation. So let's do a little experiment here. [But We're Good At Differentiation] Can you tell which is the muffin and which is the chihuahua? [LAUGHTER] How are we doing? Let's go. [CHATTER]

  • 13:54

    STUART SHERMAN [continued]: Pretty good. But a little bit of a challenge, especially with the ugly chihuahua there. A chihuahua only its mother could love. So this is a really interesting thing, because it's a very hard challenge for AI. And in order to teach AI to do this, you have to show it over 6,000 chihuahua pictures.

  • 14:15

    STUART SHERMAN [continued]: And yet if I took a two-year-old, I could teach it to recognize a chihuahua-- what, three chihuahuas? Maybe even one or two. It would probably recognize them as dogs. And they would also, if you showed it a cat that looked kind of chihuahua-ish, it would say cat, the two-year-old. Not really an it. I understand. So what's happening with AI is that the error rate

  • 14:37

    STUART SHERMAN [continued]: is dropping over time. So the AI is getting smarter. And then, let's go back to this little device in my pocket. So some of you are close to my age. Who remembers 20 years ago? How many phone numbers did you have memorized? Would you say over 10?

  • 14:55

    AUDIENCE: Yes.

  • 14:56

    STUART SHERMAN: Yeah? How many phone numbers do you have memorized today? Offloaded. So we offload our capabilities. So there's a great term for this. This is all into neuroscience, which is one of my favorite areas. And they use the word "umwelt." So "umwelt" actually is a German word, and it means environment.

  • 15:18

    STUART SHERMAN [continued]: And when neuroscientists talk about umwelt, what they mean is the environment that an organism believes it's in based on the sensory input that it receives. So for example, you all see in the visual light spectrum. And it's hot in here, so I'm not wearing my jacket. So you've made some judgments of me based on the way I'm dressed

  • 15:39

    STUART SHERMAN [continued]: and how I look. And you may decide I have a fiery temper because I'm a ginger and blah, blah, blah. I understand. But if you saw in the x-ray spectrum, you may judge me entirely differently. You may look at me and say, you know what? He's got some very honest-looking femurs. So as you start to think about this,

  • 16:01

    STUART SHERMAN [continued]: we're now starting to plug things into our own umwelt. So this is my umwelt expander. My memory's been offloaded. I now have a much larger memory. How many apps do you have? I've got over 100. The things that I can do, the reach I have-- I can tell what people in Toronto are doing today.

  • 16:21

    STUART SHERMAN [continued]: So my umwelt reaches all the way over to Toronto. If I want to go somewhere in traffic, I use Google Maps, and I can see miles ahead of me. I've expanded my umwelt with technology. And with artificial intelligence, we're going to see more and more of this. And then, to bring this back to marketing, it's a question of what the company, what the brand, can

  • 16:43

    STUART SHERMAN [continued]: deliver that removes load from me, that expands my umwelt, because this is my new norm. The new norm for every consumer is, how can this, whatever this is, make my world bigger? Where does it fit? These are very important thoughts, because in China last August, there was a big meeting,

  • 17:05

    STUART SHERMAN [continued]: and the head of Alibaba, head of Baidu, Tencent, all of the big companies, Huawei, all got together and they decided that the future of the internet wasn't the internet. It was AI delivered over the internet. So do you remember ethernet? Yeah? We don't talk about ethernet anymore. We talk about Wi-Fi.

  • 17:25

    STUART SHERMAN [continued]: Ethernet's disappeared. Wi-Fi plugs into ethernet, but we don't talk about ethernet. Now we're not even talking about Wi-Fi. Wi-Fi is like a basic human right. Cut off the Wi-Fi, it's like, how could you? So the layers-- we just keep on adding layers and adding layers. Nobody talks about TCP/IP, which is the protocol that

  • 17:46

    STUART SHERMAN [continued]: runs the internet. And most people can't even spell it. So it becomes this question of, is AI actually dangerous? So we start thinking about this. And people say, do guns kill people? I'm not sure. The Adam Smith analogy is the hammer, where you could use a hammer to build

  • 18:08

    STUART SHERMAN [continued]: a house for a homeless person, or you could knock somebody on the head with it and take their purse. It's a choice of how you use it. So this is the question that I have about this, which is really ethics. And my theory is that there's actually a variation with ethics, and I would argue prospect theory--

  • 18:29

    STUART SHERMAN [continued]: Kahneman and Tversky won a Nobel Prize for this. This applies to ethics in general. So would you steal food from Tesco? Right now? If you were hungry? If your children were starving to death? How are we doing? You're going to start stealing, right? So what conditions would you kill under?

  • 18:50

    STUART SHERMAN [continued]: When you start thinking about this, the lines are not as black and white as we think they are. And when we talk about ethics with AI, it's really important to understand that a lot of this is culturally dependent. So here we are in the land of the NHS, and people believe that their health information is private because we're Westerners, that my health information belongs

  • 19:10

    STUART SHERMAN [continued]: to me. I don't want you to know about my arterial sclerosis or any of the strange itchy diseases I may have had in my life. However, if you lived in China, you don't have the same sense, because societally, you were brought up differently. It's a much more collective society. So in China, when you go into the doctor, you have no expectation of privacy

  • 19:31

    STUART SHERMAN [continued]: of your medical records. So who's going to be faster? Who's going to be further ahead in developing AI that analyzes humans at scale? China. They've got more humans, and they've got more data. So as we start to think about this, we have AI-- these ethical AI-related gaps. And we have to think about them a lot.

  • 19:53

    STUART SHERMAN [continued]: And we're starting to see this a lot more. The Netflix analogy is one of my favorites. I assume everybody watches Netflix. And you see the little cover art thumbnails that you get. So those cover art thumbnails are generated by what you watch. So they're actually a key piece of the movie

  • 20:15

    STUART SHERMAN [continued]: that the AI thinks will be the most enticing to you. So what happens? In Berkeley, California, a whole bunch of Netflix engineers get together, and they design the AI that figures out what you watch and figures out what key frame of the movie is going to be the one that's going to be enticing to you. Now, Berkeley, California, bunch of engineers,

  • 20:36

    STUART SHERMAN [continued]: probably mostly white. It's my guess based on the outcome, because the outcome was that people started complaining that the AI was racist. Why? Because black people tend to watch more movies with black actors in them. And there's a whole genre of that, just like Indian people watch movies with more Indian people in them, and I

  • 20:57

    STUART SHERMAN [continued]: try to watch more movies with Ed Sheeran. [LAUGHTER] So the point of this is that what happened was movies like Good Will Hunting feature a black janitor for about three minutes and four lines. And yet black people were getting a key frame of the movie Good Will Hunting

  • 21:17

    STUART SHERMAN [continued]: with the black janitor on it. Wow. So is the AI racist? No. AI can't be racist. It's a computer program just making decisions that humans programmed it to do. Is it acting racist? I'm not even sure, because it's actually accurately predicting what you would be interested in. But you're not interested in three minutes of it.

  • 21:39

    STUART SHERMAN [continued]: So what went wrong there actually was that the AI wasn't finished before it was rolled into market. And this is a real risk that brands have. What they didn't do is they didn't put a couple more statements in it. It needed a, if this person watches these kinds of movies and there is a black actor in it and that black actor is

  • 22:02

    STUART SHERMAN [continued]: in more than 20% of the movie and has a speaking role, then feature it as the cover art. Not hard. But nobody thought about it. They just rolled it out in a hurry, and they weren't thinking. It wasn't tested. Nobody thought about that. But it did do a lot of damage to Netflix's brand for a short period of time.
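A minimal sketch of the kind of extra rule described above. The field names (is_black, screen_time_fraction, has_speaking_role), the helper functions, and the 20% threshold are illustrative assumptions, not Netflix's actual thumbnail system.

```python
# Hypothetical guard for personalized cover art, following the rule described in the talk.
# All names and the 0.20 threshold are illustrative assumptions, not a real API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Appearance:
    actor: str
    is_black: bool                # demographic flag from casting metadata (assumed)
    screen_time_fraction: float   # share of the film's runtime the actor is on screen
    has_speaking_role: bool

def pick_cover_frame(appearances: List[Appearance],
                     viewer_prefers_black_leads: bool,
                     default_frame: str,
                     frame_for: Callable[[str], str]) -> str:
    """Personalize the thumbnail only when the featured actor is genuinely
    central to the film: more than 20% of screen time and a speaking role."""
    if viewer_prefers_black_leads:
        for a in appearances:
            if a.is_black and a.has_speaking_role and a.screen_time_fraction > 0.20:
                return frame_for(a.actor)   # safe to personalize on a real lead
    return default_frame                    # otherwise fall back to the generic key frame
```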

  • 22:25

    STUART SHERMAN [continued]: And I do not believe in the Trump theory of all publicity is good publicity. Some publicity sucks. So let's talk about what AI can do a little bit. So here's a fun one. Sexuality. So deep neural networks are more accurate than humans in predicting the sexuality of people based on their faces. I'm sure we've all been in this situation before. We were talking to somebody, and you're not

  • 22:46

    STUART SHERMAN [continued]: quite sure of their sexuality. And today, we actually have this including gender now, so it's even broader. Well, believe it or not, machines are actually very good at guessing if you're a homosexual or not. So should a computer out you, though? Are they entitled to? What does that mean?

  • 23:09

    STUART SHERMAN [continued]: Who has the right to do that? And what if the computer's wrong? We tend to trust computers. So we're going to assume the computer's right, and yet it could be wrong. So we get into these questions of, is sexuality private? And what can you do if it can be seen on your face?

  • 23:31

    STUART SHERMAN [continued]: So let's talk about this in terms of identity. And this is something that's happening in China, and you've probably heard about it. But it's actually happening in England too. There's CCTV in London, and they're using face-tracking technology. So this is when you hear about the guy who stabbed somebody and they catch him. It's because they followed him from one CCTV camera

  • 23:51

    STUART SHERMAN [continued]: to the next to the next to the next to the next. How did they do that? Facial recognition software. So what happens when you do this? In China, they're looking for asocial behavior. Did you jaywalk across the road? Did you litter? Did you do any of these things? So what happens when you put it into retail? And retailers are starting to do this.

  • 24:13

    STUART SHERMAN [continued]: So I walk into the store. Hello, Mr. Sherman. It's great to see you. Wow, they recognize me. Or were they using facial recognition, going to a tablet that the sales clerk's holding, and now they know me? How acceptable is that? And what if it's a false positive? Hi, Mr. Sherman. It's great to see you. I'm not Mr. Sherman. Oh.

  • 24:33

    STUART SHERMAN [continued]: Not so bad. But what if it's giving me information-- sorry. What if it's giving the sales clerk information about me that's inaccurate? So we start to look at this. And what if we could start looking at your body language to see if you're just browsing or if you plan to buy? We're working on that. Policing.

  • 24:55

    STUART SHERMAN [continued]: And this is real. Using Google Glass-type technology, police are able to get information. You might have seen a lot of movies and TV shows with the cyborg or whatever, and you see the inside of what they're seeing and data is coming up. This is becoming a real possibility. And so what happens if you put it into a store?

  • 25:18

    STUART SHERMAN [continued]: What if this is part of customer service? What if the Google delivery person or Amazon delivery person has this as they're coming up to your house? How intrusive? So not only that, but tracking. Believe it or not, this is software from China. I visited them. It is called Skynet.

  • 25:39

    STUART SHERMAN [continued]: Clever. I'm not sure they really knew what they were doing there. But anyway. So it's following you around, and it's updating a database of where you've been. So now, let's think about this in retail. Imagine that you're H&M and you could actually start heat mapping people as they move through your store and track what they're picking up and looking at and putting

  • 26:01

    STUART SHERMAN [continued]: down versus carrying to the change room versus coming out of the change room and actually going to the till? This is really powerful. You can see how this is amazing data. But is it good? Do you want to be on it? So let's talk about AI knowing you. So here's a computer-based personality. There's a citation there.

  • 26:22

    STUART SHERMAN [continued]: Yes, the person's name is actually Youyou. [Can AI know you? Computer-based personality judgments are more accurate than those made by humans] And they published this in 2015. And it is true. So let's talk about human IO, input/output. So how do we obtain data from computers? Well, we have sound. It bings at you.

  • 26:42

    STUART SHERMAN [continued]: It does something. You have visual, and you touch. Your phone vibrates. It's got taptic or haptic sensors and things like that. So let's talk about human data interfaces. So we've got body language. See what I'm doing? Confident. Scared. We have facial expression.

  • 27:03

    STUART SHERMAN [continued]: Happy. Worried. I know something. And we have speech. So for speed, the fastest thing you can do is recognize somebody's body language or facial expression. You get a really quick idea of what they're thinking about. For quantity of data I can pass, body language, you get a little bit. Facial expression, you get more.

  • 27:23

    STUART SHERMAN [continued]: If I talk to you, you can really know what's going on. And generally, when you see a friend and they look a little depressed because of their body language and facial expression, you say, tell me all about it. So we revert to speech. And then, for reliability-- I hurt my leg, and I'm standing like this. So what does that mean? And facial expression. Well, we don't all have the same facial expressions.

  • 27:45

    STUART SHERMAN [continued]: My EA is from Goa, and when she's telling me something is going to happen, to me it looks like it's not going to happen. And I had to actually retrain my thinking to make sure that I could understand that. So it's not necessarily reliable, but speech is highly reliable.

  • 28:06

    STUART SHERMAN [continued]: And then, when we talk about fidelity, it's the quantity of data that I can throughput quickly. And the answer is that speech is the best. So I would argue that if eyes are the windows to the soul, then speech is the doorway to the subconscious. You flow speech out of your subconscious mind. I'm not really thinking about what I'm saying right now,

  • 28:29

    STUART SHERMAN [continued]: it's just kind of spilling out. I hope you don't mind. But this happens all the time. We also have terms for it. A Freudian slip. Things like that happen. Two guys are talking, and the guy says, I meant to ask for two-- I meant to ask the lady for two tickets to Pittsburgh, and instead I said two pickets to Tittsburgh.

  • 28:50

    STUART SHERMAN [continued]: And the man says, you know what? I had the same thing happen to me this morning. I was having wife-- sorry, I was having breakfast with my wife, and I meant to say, sugar-- sorry. Honey, will you pass the sugar? And instead I said, you bitch, you ruined my life. [LAUGHTER] So this does happen. So where I see it is with neuro-linguistic pattern

  • 29:12

    STUART SHERMAN [continued]: matching, which is basically the modern version of thematic content analysis, which is really looking at the way you use language, because it indicates how you see the world. And so if you hear somebody and they're speaking and they're positive, you know that they're a happy person. And when they're using negative terms, you know that things aren't good.

  • 29:32

    STUART SHERMAN [continued]: And there are some really interesting studies recently. People, before they actually get schizophrenia, start using words around sound metaphors a lot more, which is quite fascinating. And so we do this as humans. So you're talking to your friend on the phone. You hear a whole bunch of words, and you put their state.

  • 29:52

    STUART SHERMAN [continued]: Or you'll actually put a new person that you meet. So for example, you probably already have a sense of my personality and what I'm like. You put me in a bucket. You probably know somebody who's a lot like me, so now you've clustered me. You might not know somebody like me. If you do, I'm sorry. So let's think about this. Back to the dogs.

  • 30:13

    STUART SHERMAN [continued]: And a retriever is going to like ducks. A herding dog is going to like sheep. So we're going to come to conclusions. I assume that none of you have had the displeasure of meeting Donald Trump, but we could all predict what he's going to do if I gave you some theoretical circumstance. We know what-- you could probably write a tweet for him.

  • 30:37

    STUART SHERMAN [continued]: So what we're doing is we're getting computers to do it, and computers do it a lot more reliably. And the reason that computers do it more reliably is that, essentially, humans do it poorly. So you see a guy driving a Rolls Royce. He looks really disaffected. You look at him and you go, snob.

  • 30:57

    STUART SHERMAN [continued]: But he's actually the chauffeur returning the car to the garage because he's been fired. We make these assumptions all the time. So how do we do it? We take language and a known outcome, and we have an AI that correlates them. And if there's a correlation from a language sample to a known outcome, we have success.
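A minimal sketch of that language-to-outcome step, assuming toy interview snippets, TF-IDF features, and a logistic regression; this illustrates the general idea only, not IMC's actual behavioral-AI pipeline.

```python
# Minimal sketch of "language sample + known outcome -> learned correlation".
# The toy data, TF-IDF features, and logistic regression are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Interview transcripts paired with a known outcome (1 = succeeded, 0 = didn't).
samples = [
    "I plan my meals and I track everything every week",
    "I suppose I could try, but it never really works for me",
    "We cook together at home and I walk every morning",
    "It's always something, the schedule, the groups, Wednesdays",
]
outcomes = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(samples, outcomes)

# If held-out language predicts the outcome better than chance,
# the correlation the speaker describes is there.
print(model.predict(["I never manage to stick with these programs"]))
```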

  • 31:19

    STUART SHERMAN [continued]: So by way of example, we're doing a study with the NHS Trust in Suffolk, and we're looking at people who are taking a weight-loss program, like a Weight Watchers or Slimming World, depending on which side of the pond you're from. And they have about a 27% success rate of people that they bring into the program. And they do a prescreening interview. So we're taking this prescreening interview.

  • 31:40

    STUART SHERMAN [continued]: We're processing it, and we're looking at, what are the unique characteristics of people's language use that indicate success? 27%. And then, what are the unique indications of language that indicate that they would fall into the 73% failure? And then, what clusters exist around failure?

  • 32:01

    STUART SHERMAN [continued]: Which is really important, because you want that language-to-outcome link where this language equals failure. And then, what we expect to find is clusters. So instead of having 73% that-- we're now playing the parable of the blind men and the elephant. They take three-- it's an old Chinese parable. They take three blind men, put them in a room with an elephant. Each one grabs a piece of the elephant, and they say, describe an elephant.

  • 32:23

    STUART SHERMAN [continued]: You didn't get an elephant. So what we're looking at is, how do you take this 73%? And instead of ringing up 20 of them and saying, why didn't you come? And they say, well, it's Wednesdays. I don't like Wednesdays. Why? It's a group. I don't like groups. What we do is we plot similar people, and then you do the anthropology on each one of those groups.
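And a rough sketch of what plotting similar people might look like, assuming TF-IDF features and k-means with three clusters; the snippets and cluster count are invented for illustration, not the Suffolk study's data or method.

```python
# Illustrative sketch only: cluster the 73% who didn't respond by how they talk,
# so each cluster can be examined ("the anthropology") separately.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

failed_interviews = [
    "the group sessions don't suit me, I'd rather do it alone",
    "Wednesday evenings are impossible with the kids",
    "I don't really believe the plan will work for someone like me",
    "talking in front of strangers puts me off completely",
    "my shifts change every week so I can never commit to a night",
]

features = TfidfVectorizer().fit_transform(failed_interviews)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Each cluster is a candidate "reason for failure" to investigate qualitatively.
for cluster_id, text in zip(labels, failed_interviews):
    print(cluster_id, text)
```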

  • 32:45

    STUART SHERMAN [continued]: And that becomes incredibly powerful. So this is what it looks like. [BENEFITS. FAILS] I probably should've just used the slides. So we've got the clusters of people who would benefit and the clusters of people who would fail. This isn't the real data. [GOOD. ANTHROPOLOGY NEEDED] And where it's good, you don't have to do anything. And where it's failing, you can actually go and do

  • 33:06

    STUART SHERMAN [continued]: the anthropology, which in marketing parlance we would call focus groups. The challenge with focus groups, though, is that you've chosen people using the logic that Prince Charles and Ozzy are the same person. And what this is doing is it's actually letting you subsegment the people by the way they think. So now you can say, are they a sheepdog or a terrier?

  • 33:29

    STUART SHERMAN [continued]: And that makes a big difference, because you don't motivate a sheepdog the same way you motivate a terrier. And this I described as the persona problem, because in marketing today, which was the same as in marketing 50 years ago, we made large assumptions about the groups of people. And they were based on theoretical thoughts

  • 33:49

    STUART SHERMAN [continued]: that we had and impressions. And I hope I proved to you that your impressions aren't so good, and that when you focus on things, you don't really necessarily get it all, because you miss stuff at the edges of your vision and you're easily fooled. And the data really makes a difference. So the question really is, do we have a bright future with AI?

  • 34:14

    STUART SHERMAN [continued]: Is this a solution, or is this a problem? And I would say it's really all how we use the tools. If it's used deceitfully, then no. It's a problem. If it's used ethically, then yes. Ultimately, it's going to come down to how we use it.

  • 34:35

    STUART SHERMAN [continued]: It's truly up to us. So the challenge that we have really in the next 20 years-- maybe 10, maybe 5, I don't really know-- is really going to be about how corporations start using AI and how consumers react to it. And that's really choices that we'll make. Thank you. [APPLAUSE]

  • 34:56

    STUART SHERMAN [continued]: [MUSIC PLAYING]

Abstract

Stuart Sherman, CEO of IMC, discusses the use of artificial intelligence, and its ethical implications, to understand human behavior and influence retail marketing.
