RAGHU IYENGAR: So in this part, we'll start talking about descriptive data collection. Descriptive data collection, as you recall, is basically about trying to understand questions like: who are our customers? What is our share of wallet? All those kinds of questions where you need hard numbers. So how can you do this? It can be done in two broad ways.
RAGHU IYENGAR [continued]: One is active data collection and the other is unobtrusive data collection. Within active data collection, there are broadly two approaches. One is surveys, which are the mainstay of market research for many, many companies. The other is self reports coming from your customers. We'll talk about both of them. So let's start with surveys.
RAGHU IYENGAR [continued]: Surveys are used by pretty much every Fortune 500 company. They are regularly used for gathering customer attitudes: you can think about sentiments, you can think about purchase habits, many different things actively gathered by surveys. And the data, of course, can help you segment customers, start thinking about who our customers are and who they buy from.
RAGHU IYENGAR [continued]: All of those questions that you need to understand to set your marketing strategy. Now, there are many, many companies out there that can help you do surveys. I'll give you some examples. Qualtrics is a very well-known company that sometimes helps you conduct free surveys. Another example is SurveyMonkey. Now, both of these companies not only
RAGHU IYENGAR [continued]: help you run free surveys sometimes, but they can also act as full service companies. So for instance, if you look at the pricing plan of one of these companies, they price you differently based upon whether you would like them to find your respondents, set up the survey, analyze the data, and give you that data. So in that sense, when you start thinking about doing surveys, there are already companies out there
RAGHU IYENGAR [continued]: that can help you reach out to customers, collect that data, and analyze it. Of course, surveys are not the only way in which you get information from customers. You can actually ask customers directly to self report some of that information. So look at, for example, mobile surveys. That's the next frontier.
RAGHU IYENGAR [continued]: And these are basically companies that deliver surveys on mobile devices. Again, some common examples: Qualtrics is one company that does both. It does surveys on your desktop, and it also does surveys on mobile devices. Another company, for example, is Mixpanel. Again, what's the idea here? The idea is that you sometimes want to send surveys to customers in the moment of purchase.
RAGHU IYENGAR [continued]: So what do mobile surveys allow you to do? They allow you to capture customers' reactions in situ rather than retrospectively. For example, you can actually send a survey to a customer's mobile device at the time they're making a purchase decision rather than one month later. So clearly, the kind of sentiment, the feelings that customers might have at the time of purchase might be better captured than by making
RAGHU IYENGAR [continued]: them think about that purchase one month later. The questionnaire can also be tailored based on location and context. Looking at where the mobile device is tells you where the customer is. If the customer is in a mall, you can ask questions about what they're doing in that mall. If the customer is at a restaurant, you can ask questions tailored to that. So very tailored surveys can be done.
RAGHU IYENGAR [continued]: But what's the caveat? You don't want to overdo it. Marketers should be very careful: while you have the leverage of making a tailored survey, you don't want to keep sending surveys again and again to the same customer. You quite often see a huge amount of survey fatigue setting in. So it's important to use this power of mobile surveys,
RAGHU IYENGAR [continued]: but only up to a limit. So now that we've talked about different ways of conducting surveys, mobile surveys and so on, let's go into more depth. What kinds of questions can you ask using surveys? And what are some dos and don'ts? Now, before implementing a survey, two big issues come forth. What are the different kinds of questions? And how do you validate a survey?
RAGHU IYENGAR [continued]: In other words, is it worth collecting what you're collecting? Let's go over the first issue: what are the different kinds of questions? Now, this is, in some sense, a list of different kinds of questions. Of course, there are many other kinds out there, but these are the important ones. So what I'm going to do in the next few slides is go over each type of question, look at the positives and negatives,
RAGHU IYENGAR [continued]: in some sense the pros and cons, and then talk about best practices. So let's start with the first one, the itemized category. Here's one example: how satisfied are you with your health insurance plan? There could be different buckets. In this case, you have five buckets, from very satisfied to very dissatisfied. Notice the category descriptions are quite clear.
RAGHU IYENGAR [continued]: And there is a balance of favorable and unfavorable categories. What does that mean? There's a middle point, neither satisfied nor dissatisfied. Above that, you have two categories, quite satisfied and very satisfied, and two categories below it, quite dissatisfied and very dissatisfied. So on the surface of it, I think this is a very good way
RAGHU IYENGAR [continued]: of asking a question. But what are some cons? Well, one big con is: compared to what? Of course, if the person answering this question does not have health insurance, the question is clearly not relevant. It could also be that the question is being answered by a person who's thinking about insurance that they had before.
RAGHU IYENGAR [continued]: If that's the reference point, the answers we get could be quite different across people. In other words, the problem is that you don't know what people are comparing against. So that's one issue with the itemized category. Let's look at another one, which tries to address this issue. This is what you would call a comparative question.
RAGHU IYENGAR [continued]: You directly ask: compared to private clinics in this area, the doctors in private practice provide a quality of medical care which is very inferior to very superior. So what have you done here? You've tried to address the problem with the previous type of question by explicitly telling people what to compare against. But what's the problem here?
RAGHU IYENGAR [continued]: What's the big loss of information? The big loss of information is that both alternatives might not be that great. You might be comparing two alternatives which are both below the bar, but one is better than the other. So what do we see here? Depending upon the type of question you ask, there is always some loss of information. So what I want to do, again, in the next few slides is
RAGHU IYENGAR [continued]: to show you different kinds of questions, each of which tries to get at the heart of these problems. Here's another one, called a ranking question. An example would be the following: please rank the following characteristics of, let's say, cell phone service in terms of their importance, one through eight. There are eight categories given. One is the most important, eight is the least important.
RAGHU IYENGAR [continued]: And typically when you ask these questions, no ties are allowed. What that means is, only one of these things can be the most important, only one the least important, and so on. So what do we see here? First of all, the categories are quite clear. But answering also involves a lot of comparisons. In other words, people have to do a lot of comparisons
RAGHU IYENGAR [continued]: as they go through all of this. If you look at it, for the first rank you're comparing across eight different categories and giving one of them the top rank. Let's say reception clarity is the most important one for you, so you give it number one. Then, when you're assigning rank number two, you again have to do seven comparisons.
RAGHU IYENGAR [continued]: It's a lot of different comparisons. What that means is that this type of data might not be very beneficial to collect if you have a lot of categories that you want people to compare. What might end up happening is that people give ranks one, two, and three after thinking a lot, and after that there might be too many comparisons
RAGHU IYENGAR [continued]: for people to make. So the typical rule of thumb, or the best practice here, is not to give too many categories. Eight might actually be quite a lot, so maybe six to eight is a good number of categories to give. If you give more than that, you might get good quality data only for the top one or two ranks, and after that there might not be much distinguishing data.
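The comparison burden described above grows quickly with the number of categories. A minimal sketch of the arithmetic (the function name is mine, for illustration):

```python
def ranking_comparisons(n_items: int) -> int:
    """Worst-case pairwise comparisons implied by fully ranking n items
    with no ties: (n-1) + (n-2) + ... + 1 = n * (n-1) / 2."""
    return n_items * (n_items - 1) // 2
```

Eight categories imply up to 28 pairwise comparisons, while six imply 15, which is one way to see why shorter lists tend to yield better data beyond the top-ranked items.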
RAGHU IYENGAR [continued]: Another example of this is something called a paired comparison. In fact, as we'll cover later in the sessions on something called conjoint analysis, you will see that this type of comparison data actually comes from a conjoint type of survey. Which of the following two products do you prefer? On the left hand side, you have a Honda Accord, a price of $18,000,
RAGHU IYENGAR [continued]: automatic transmission, and a luxury package. On the right hand side, you have a Toyota Tercel, $16,000, manual transmission, standard package. What are we trying to do here? We're forcing people to compare the two objects. By looking at what they choose and what they don't choose, you try to understand what it is that people care about when they're choosing
RAGHU IYENGAR [continued]: among these two products. This looks like a great way of understanding what people like. Why? Because it actually mimics what people probably do in the real world. Imagine you want a laptop. What do you typically do? You go down, let's say, to Best Buy or any other store, or maybe to Amazon, whatever your preferred provider is. You go and start comparing different laptops.
RAGHU IYENGAR [continued]: You choose the different dimensions on which you want to compare. So, for example, for a laptop it might be the screen size, how heavy it is, what the CPU is, and so on. So this really mimics what people do in real life. But what are some issues with this? The issues are the following. Again, suppose you have two products here, a Honda Accord and a Toyota Tercel.
RAGHU IYENGAR [continued]: People might prefer the Honda Accord to the Tercel, but might actually dislike both. In other words, among these two the Honda Accord is preferable, but it's still below the bar in terms of what they like. Another problem, of course, is that a large number of brands cannot be compared. Imagine yourself comparing among six or seven different brands
RAGHU IYENGAR [continued]: with lots of different kinds of features. It would be very, very difficult for you to make that decision. Why? Because, again, there would be a lot of comparisons. What's the best practice here? Typically, have about two to three brands, so that you get good data on how people compare across brands, and don't use too many features or too
RAGHU IYENGAR [continued]: many attributes per brand. Typically, about six is a good number. In this example, we have four: the brand name, the price, the kind of transmission, and the kind of package. So about four to six features per brand, and about two to three brands in the comparison. Anything more than that, I think, will make it very difficult for respondents
RAGHU IYENGAR [continued]: to clearly understand what the differences are and give you good, reliable data. The next one is the most common form. What's the story here? The idea is that you have many statements, typically listed as rows, as you see here. The first one might be "I buy many things with a credit card,"
RAGHU IYENGAR [continued]: and so on. In each row you answer whether you agree or disagree with the statement. So this gives you the ability to collect a lot of data on what people like and what people don't like. This is called the Likert scale. It's the most common form of questioning, used very frequently when you want people to think about lots of different statements.
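A minimal sketch of how such Likert responses are typically coded for analysis. The statement is the credit-card one from the slide, but the five-point labels and the sample responses below are assumptions for illustration:

```python
# Map five-point Likert labels to numeric codes (assumed labels).
LIKERT_CODES = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
                "Agree": 4, "Strongly agree": 5}

# Illustrative responses to "I buy many things with a credit card".
responses = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]
scores = [LIKERT_CODES[r] for r in responses]

mean_score = sum(scores) / len(scores)                   # average agreement
top_two_box = sum(s >= 4 for s in scores) / len(scores)  # share who agree
```

With these made-up responses, the mean score is 3.6 and 60% of respondents agree, the kind of summary marketers typically report for each statement.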
RAGHU IYENGAR [continued]: In this case, about credit cards and related ideas. Here's another example, something called the continuous scale. The idea here is that you have, for example, something you want to show people, and you want what are called in-situ preferences. What that means is, you want preferences as people are thinking about or looking
RAGHU IYENGAR [continued]: at a particular, let's say, video or movie clip. What do people do? Typically there's a bar. It can be done on the internet very easily, and it's very popular with computer mediated surveys. You can have a bar controlled by mouse clicks, and as people are watching a video or an advertisement, they can move this bar between "I like this"
RAGHU IYENGAR [continued]: and "I don't like this." So this is very popular, especially in computer mediated surveys, when you want information on how people are reacting to your product and how that preference changes as they go through it. So if they're looking at a particular video, let's say you're an ad provider and you want to see how people's preference for that ad changes as they are viewing it.
RAGHU IYENGAR [continued]: So while they're viewing it, they can keep moving that dial. Many of us may have seen this more recently in election polls. During election debates, they typically have an audience with a meter that can go back and forth. So as the candidates go through an argument, you can see in real time how people's preference shifts toward one or the other candidate.
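The dial-meter data just described can be sketched as a time series of slider positions. The numbers below are made up purely for illustration:

```python
# Hypothetical continuous-scale trace: (seconds into the ad,
# slider position from 0 = dislike to 100 = like).
trace = [(0, 50), (5, 62), (10, 71), (15, 40), (20, 55)]

# The moment of peak liking -- e.g. where the ad resonated most.
peak_time, peak_value = max(trace, key=lambda point: point[1])
```

An ad provider could use this kind of trace to find which scene drove liking up (here, the sample at 10 seconds) and which scene caused the drop.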
RAGHU IYENGAR [continued]: So this is what is called a continuous scale. Now, what I wanted to do here was give you a broad overview of the different kinds of questions. Notice this is not an exhaustive set; there are many other kinds of questions out there. But what I want you to take away from all of this is that each type of question you ask, whether it's a rating scale, a comparative scale,
RAGHU IYENGAR [continued]: a Likert scale, and so on, has some pros and cons. So thinking carefully about what kind of question to ask depends upon what the end goal is. That brings me to issue number two. What is the end goal here? The end goal can take two forms: one is called validity and one is called reliability.
RAGHU IYENGAR [continued]: In other words, is what you're collecting going to be worth anything at all? So let's take that issue. Validity here is basically the idea of predictive validity. So for instance, let's say you're asking for a Net Promoter Score, which is something I'll talk about in the next slide. The Net Promoter Score is typically a measure of customer satisfaction.
RAGHU IYENGAR [continued]: You're trying to see whether people are going to refer your product to other people. What you would hope is that the Net Promoter Score predicts, for example, customer profits, firm profits, or other kinds of dependent variables that you might be interested in. If it does, then you would say that that particular survey measure, the Net Promoter Score,
RAGHU IYENGAR [continued]: has good predictive validity. Predictive validity means that it's worthwhile collecting that survey data because it predicts a particular type of dependent variable that you as a firm are interested in. It could be profits, stock prices, or other kinds of behavior. Another way to look at how good a survey is, is its reliability.
RAGHU IYENGAR [continued]: One specific form of reliability is called test-retest reliability. What that means is very simple. It asks how stable what you're collecting is: if you were to re-measure, for example, customer satisfaction, does it vary a lot? If it varies a great deal from one measurement to the next, that tells you it's perhaps not a very stable measure.
RAGHU IYENGAR [continued]: There is a lot of volatility. What you'd like is a measure, a scale, a way of measuring things, that is reasonably stable, so you can take comfort in the fact that once you measure it, it's not going to change wildly. So those are the two ways in which you can assess how good a survey is: validity and reliability.
RAGHU IYENGAR [continued]: In the next few slides, we're going to take a concrete example of a survey, the Net Promoter Score, and see the do's and don'ts of survey design as well. But before I do that, I just want to summarize the pros and cons of surveys. Let's first look at the pros: low cost, relatively easy to implement, and a good way to learn about potential customers.
RAGHU IYENGAR [continued]: On the cons?Not very easy to write a survey that's non biased.And we'll bring that up when we'lltalk about the best practices.The second big issue is, how do you get the right respondents?Who are the people that should answer the survey?Again, something I'll bring up and we talk aboutthe do's and don'ts.And what about products that require some use?So their surveys might not be the best way to do.
RAGHU IYENGAR [continued]: Why?Because these are products that actuallyrequire customers to use the product.In that case, what do people do?Typically you might do, for example, a focus groupwhere you ask people to first look at the prototype,touch the product, feel the product, use the product,and then perhaps do a survey after that.So these are the big pros and cons of surveys.When you think about implementing some surveys
RAGHU IYENGAR [continued]: keep these in mind.
Series Name: Customer Analytics
Publication Year: 2016
Keywords: categorical data analysis; comparison; consumer dissatisfaction; consumer preferences; consumer satisfaction; cost effectiveness; customer satisfaction research; data collection; in situ; Likert scales; market research; marketing strategy; prediction (methodology); qualitative data collection; question formation; reliability (research methods); research design; scales of measurement; survey research; surveys and information bias; validity and reliability
Segment Num.: 1
Raghu Iyengar, Professor of Marketing at the Wharton School of the University of Pennsylvania, discusses active data collection using surveys, including types of questions and establishing predictive validity and reliability.