How to Read and Understand a Research Study
  • 00:04

    ANNOUNCER: Understanding research.

  • 00:23

    SPEAKER 1: This critique is due tomorrow, I have to do it today. I'm going to get it done. I'm going to do this. I'm starving. I wonder what's in the fridge.

  • 00:42

    SPEAKER 2: --Information that is known. It is imperative to be able to read and understand scientific articles.

  • 00:50

    SPEAKER 1: Oh!

  • 00:51

    SPEAKER 2: Join me in this TV special, as I discuss the major components of a scientific paper, and how to evaluate the quality of the research presented.

  • 01:02

    SPEAKER 1: This looks really good.

  • 01:04

    SPEAKER 2: The ability to share scientific discoveries is the basis of the accumulation and advancement of knowledge. A good scientific study builds on and furthers what is already known about a particular topic. Research papers are divided into seven sections, and include the abstract, introduction, methods, results,

  • 01:33

    SPEAKER 2 [continued]: discussion, acknowledgments, and literature cited. In each section, we will discuss key terms used, and what you should look for to appropriately evaluate the quality of the information presented. Abstract-- the very first section of a paper

  • 01:54

    SPEAKER 2 [continued]: is the abstract. It is a short summary, usually about 250 words, and is designed to summarize the research and get people interested in reading the paper. The abstract should include-- what is being studied and why-- in order to provide scientific justification-- a brief description of the procedures used,

  • 02:17

    SPEAKER 2 [continued]: a brief explanation of the major results, and the significance of the results in a larger context. Introduction-- the second section, the introduction, provides background information on which the research is based.

  • 02:37

    HAROLD TAKOOSHIAN: The introduction should be fairly short and it has a few essential elements. [Harold Takooshian, PhD, Psychology Department, Fordham University] One is that it'd state the problem being tested as a question, and then the tentative hypothesis-- an answer that the researcher offers before doing the study. It should also contain the literature of at least a few studies that have been done on the topic in the past,

  • 02:58

    HAROLD TAKOOSHIAN [continued]: so the reader can understand the background of the problem. And then it should elucidate why the research is important. In behavioral research, very often, there's a social significance that should be stated in the introduction.

  • 03:16

    SPEAKER 2: A hypothesis suggests a causal relationship or an explanation for a certain phenomenon. [Hypothesis- An educated guess attempting to explain a certain phenomenon or causal relationship] A research study attempts to answer whether or not a hypothesis is true. For example, a researcher may hypothesize that increasing caloric intake before an exam improves a student's performance on the exam.

  • 03:38

    STUDENT: Yes!

  • 03:38

    SPEAKER 2: The study will be designed to appropriately test whether caloric intake before an exam increases academic performance.

  • 03:47

    SPEAKER 1: Oh, now I get it.

  • 03:51

    SPEAKER 2: Methods. The next section in a research paper is the methods section, which describes the exact procedures that were performed in the experiment, so that someone reading this section could easily repeat it.

  • 04:08

    HAROLD TAKOOSHIAN: The methods section in APA publications has to contain three essential components-- the participants who participated in the study, the materials, what items were used-- apparatus or questionnaires-- and then the procedure-- how the materials were applied to the participants.

  • 04:27

    SPEAKER 2: For the purposes of this special, we will group research studies into three different categories-- observational studies, true experiments, and quasi-experiments.

  • 04:41

    SPEAKER 1: There are three types of studies? How do I know which one this is?

  • 04:45

    SPEAKER 2: Observational studies usually involve the observation of ongoing behavior. These studies are designed to describe behavior in order to generate hypotheses for future study, or to try to find patterns of relationships among the behaviors being observed. They do not really attempt to determine

  • 05:05

    SPEAKER 2 [continued]: the cause of these patterns, but merely that they exist.

  • 05:09

    KATHLEEN M. SCHIAFFINO: Observational studies are ones in which the individual is not supposed to influence anything that is going on in the usual setting. [Kathleen M. Schiaffino, PhD, Chair, Psychology Department, Fordham University] Typically, there are two kinds of observational studies-- one is called "naturalistic," in which there is no way that the observer is involved at all. And the easiest example would be if somebody was collecting

  • 05:32

    KATHLEEN M. SCHIAFFINO [continued]: data about plant growth. The second kind would be participatory observation, and that would be in circumstances where the person can't be totally invisible. So then you usually have somebody actually be a participant in the situation while collecting data. A really typical example of that kind of study

  • 05:54

    KATHLEEN M. SCHIAFFINO [continued]: would be in a daycare setting, where perhaps you want to observe aggressive behavior in children on the playground, and you have the daycare teachers collect the material. If you had a stranger sitting in the playground, then the children would know something was going on and they might be on their best behavior. If the teachers are there, it's ordinary,

  • 06:15

    KATHLEEN M. SCHIAFFINO [continued]: and they won't take any notice, and what you get is the usual behavior that happens.

  • 06:21

    SPEAKER 1: Definitely not an observational study. What's next?

  • 06:25

    SPEAKER 2: In addition to observational studies, there are also true experiments.

  • 06:30

    HAROLD TAKOOSHIAN: A true experiment is one that's done in a highly controlled environment, let's call it a lab-- but it could be in an office, or a classroom-- a highly controlled environment. [Harold Takooshian, PhD, Psychology Department, Fordham University] And the true experiment would have an experimental and a control group, so that we can compare the behavior of the two after the treatment is administered. The independent variable is what the researcher manipulates

  • 06:52

    HAROLD TAKOOSHIAN [continued]: in a true experiment, and the dependent variable is the expected outcome-- the behavior that results from the manipulation.

  • 06:59

    SPEAKER 2: One example is a researcher wants to examine whether energy drink consumption causes an increase in academic performance. In this case, "energy drink" is the independent variable.

  • 07:11

    KATHLEEN M. SCHIAFFINO: In an experimental study, what you need to have is an experimental group, and that's the one where the independent variable is manipulated-- something is done to that group and then a behavior will be observed. [Kathleen M. Schiaffino, PhD, Chair, Psychology Department, Fordham University] You also need to have a control group, which is a group that's the same in every other way, except that they don't

  • 07:34

    KATHLEEN M. SCHIAFFINO [continued]: get that independent variable, they don't get the manipulation. It's possible to have more than one treatment group. My control group would get no energy drink, one of my experimental groups would get just one of the drinks, and then I can have another experimental group that has two or three of the drinks, and we can see if the amount of the independent variable

  • 07:55

    KATHLEEN M. SCHIAFFINO [continued]: makes a difference.

  • 07:57

    SPEAKER 2: In assessing the effect of the independent variable on the dependent variable, it is important to ensure that the only difference between treatment and control groups is the independent variable.

  • 08:09

    KATHLEEN M. SCHIAFFINO: It's important to minimize the difference between the subjects in the control group, and the subjects in the experimental group, and the most effective way to do that is using something called "random assignment." [Random Assignment- Each participant has an equal chance of being assigned to treatment or control groups.]

  • 08:29

    KATHLEEN M. SCHIAFFINO [continued]: Random assignment refers to having a group of subjects available to you, and then using a toss of a dice or a computer-generated list of numbers to randomly assign people to either the experimental group or the control group. Random selection is based on the idea that I have a particular, full population available to me,

  • 08:53

    KATHLEEN M. SCHIAFFINO [continued]: like all of the freshmen at a college, and I have a system to randomly select people who are going to participate in my study.
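
To make the distinction concrete, here is a minimal Python sketch of random selection and random assignment. The population size, sample size, and group names are invented for illustration and are not taken from the video.

    import random

    # Hypothetical population: all 500 freshmen at a college (names invented).
    population = [f"freshman_{i}" for i in range(1, 501)]

    # Random selection: every member of the population has an equal chance
    # of being chosen to participate in the study.
    participants = random.sample(population, 40)

    # Random assignment: each selected participant has an equal chance of
    # ending up in the experimental group or the control group.
    random.shuffle(participants)
    experimental_group = participants[:20]
    control_group = participants[20:]

    print(len(experimental_group), len(control_group))  # 20 20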

  • 09:05

    SPEAKER 1: I'm pretty sure it was a true experiment, but I don't think there was random assignment.

  • 09:11

    SPEAKER 2: In addition to observational and true experiments, there are quasi-experiments. Like true experiments, quasi-experiments test for relationships between variables, but they differ from true experiments because subjects may not be randomly assigned due to ethical or practical limitations.

  • 09:33

    HAROLD TAKOOSHIAN: For example, research on desegregation, where you had one school was fully desegregated, the other was partially desegregated, this would be a quasi-experiment because there is a manipulation-- we're looking at an independent and dependent variable-- but the true experiment is not there, there's not total control of the environment.

  • 09:52

    SPEAKER 2: Quasi-experiments cannot make true causal statements, but only serve to establish closer predictive relationships by identifying consistent correlational patterns. Well, how do I know if the design of the studies is any good?

  • 10:09

    SPEAKER 2: When evaluating the quality of the design of a study, there are several things to look for which may be applied to all three types of research-- were the measurements reliable and valid? How large was the sample size? How were the subjects selected? What did the experiment control for?

  • 10:30

    SPEAKER 2 [continued]: [Factors for Evaluating a Study- Reliable and Valid Measurements, Sample Size, How Subjects Were Selected, What the Experiment Controlled for]

  • 10:34

    HAROLD TAKOOSHIAN: Reliability is whether the experiment is internally consistent. For example, if we did the same experiment over and over again, would we get the same results? [Reliability- Refers to the stability of the measurement] A shorthand definition of reliability is whether the method correlates with itself. That is, whether we're finding something that has meaning, as opposed to validity, which is whether what we're finding correlates

  • 10:54

    HAROLD TAKOOSHIAN [continued]: with real-world behavior.

  • 10:57

    KATHLEEN M. SCHIAFFINO: The most important thing about reliability in an observational study is the reality that you typically have two or three people doing observations, and we have to have agreement between those people. That agreement is called "inter-rater reliability." For example, if I was doing an observation of children

  • 11:18

    KATHLEEN M. SCHIAFFINO [continued]: in a daycare center and I was measuring aggression, before I even started, I would need to make sure that we both had a common understanding of what we were going to call "aggression"-- a touch, a push, a slap. Once we have that agreement, then we can have some confidence that we're both going to be seeing the same thing, and we're going to have high inter-rater reliability.
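
As a rough illustration of inter-rater reliability, the Python sketch below computes simple percent agreement between two observers. The ratings are invented, and in practice researchers often report a chance-corrected statistic such as Cohen's kappa in addition to raw agreement.

    # Hypothetical codes from two observers rating the same 10 playground
    # incidents as aggressive (1) or not aggressive (0); values are invented.
    rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

    # Percent agreement: the proportion of incidents the two observers
    # coded the same way.
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    print(f"Percent agreement: {agreement:.0%}")  # 90%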

  • 11:38

    SPEAKER 2: Validity refers to whether or not the test actually measures the characteristic that it is intended to measure. [Validity- Refers to whether or not the test measures what it is intended to measure] It also refers to one's confidence that the findings of a particular experiment represent the truth of the situation. There are two main types of validity that can be evaluated when examining the methods

  • 11:59

    SPEAKER 2 [continued]: section of a research paper-- internal validity and construct validity. [Internal Validity- The degree to which the design of the experiment allows the questions to be tested by eliminating outside influences]

  • 12:15

    HAROLD TAKOOSHIAN: The internal validity is how real the experiment is in the laboratory, whether the participant believes what's happening and is truly a part of it.

  • 12:24

    KATHLEEN M. SCHIAFFINO: Suppose we were doing a study and wanted to look at the effect of a drug on anxiety. If, for our experimental group, we gave them the drug and then they attended some kind of rock concert, there would be changes in their behavior that might be because of the drug and might be because of the rock concert.

  • 12:45

    KATHLEEN M. SCHIAFFINO [continued]: If our control group wasn't tested under the same circumstances-- if they didn't attend the rock concert-- then there'd be no way to know which of the two things caused the changes, and we would not have good internal validity.

  • 12:59

    SPEAKER 2: Construct validity describes how well the test measures what it is believed to be measuring. [Construct Validity- How well the test measures what it is believed to be measuring] For example, if a researcher is measuring intelligence by counting the number of bumps on a person's head, the measurement may be reliable-- you will get the same number of bumps every time--

  • 13:20

    SPEAKER 2 [continued]: but it may not necessarily be an accurate measurement of intelligence. Another thing to consider when examining the quality of the study is the sample size. In general, the larger the sample size, the more reliable the results and the less likely it is that conclusions are drawn due to chance, error, or extenuating

  • 13:41

    SPEAKER 2 [continued]: circumstances. If a study attempts to describe mating behaviors of vampire bats, but only observes a few bats from a particular community, then those observations are not likely to be representative of the whole community. It is also important to notice how the subjects in the study

  • 14:02

    SPEAKER 2 [continued]: were selected and assigned.

  • 14:04

    KATHLEEN M. SCHIAFFINO: Experiments often have both true experimental components and quasi-experimental components. One example of a quasi-experimental aspect of a study would be if I was looking, for example, at the differences in male and female performance on some kind of a video game. Male and female-- I can't assign somebody to be male or female,

  • 14:27

    KATHLEEN M. SCHIAFFINO [continued]: however, I can make it more of an experimental design if, within the males and within the females, I then randomly assign one kind of video game or a more typical kind of video game. Then I'd have an experimental component.

  • 14:43

    SPEAKER 2: Results-- the results section follows the methods section of the paper, and includes the data that was collected and analyzed in a study.

  • 14:57

    HAROLD TAKOOSHIAN: The results section should be brief and it should be concise, not wordy. Perhaps tables, graphs, charts express things much better than the narrative word. But the idea of the results section is simply to show, quantitatively, whether our hypothesis was correct or not. Just enough information to tell us whether we accept or reject

  • 15:20

    HAROLD TAKOOSHIAN [continued]: the hypothesis.

  • 15:22

    SPEAKER 2: A result is significant when it is most likely not due to random chance. [Significant Result- The outcome is most likely not due to random chance] There are several different statistical tests that experimenters may use to determine if their results are significant. A common way is through the use of P-values. [P Value- The probability that differences

  • 15:43

    SPEAKER 2 [continued]: between control and experimental groups are due to chance]

  • 15:53

    KATHLEEN M. SCHIAFFINO: P-value refers to the probability that the results that you got could happen by chance alone. And in most social science research it's agreed upon that a probability of 0.05, which means a less than 5% chance that what you see in the data could have happened by chance alone-- that is generally

  • 16:15

    KATHLEEN M. SCHIAFFINO [continued]: considered to be sufficient proof to say that what happened is because of the study and not a coincidence.
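
A common way to obtain such a p-value is an independent-samples t-test. The sketch below assumes SciPy is available; the exam scores are invented for illustration and echo the energy-drink example from earlier in the video.

    from scipy import stats

    # Hypothetical exam scores (invented): one group drank an energy drink
    # before the exam, the control group did not.
    energy_drink_scores = [78, 85, 90, 72, 88, 95, 81, 84]
    control_scores = [70, 75, 80, 68, 77, 82, 74, 71]

    # An independent-samples t-test estimates the probability (p-value) that
    # a difference this large between group means could arise by chance alone.
    t_stat, p_value = stats.ttest_ind(energy_drink_scores, control_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # By the 0.05 convention described above, p < 0.05 would be treated as
    # a statistically significant difference between the groups.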

  • 16:23

    HAROLD TAKOOSHIAN: If I were undergoing cancer therapy, I would want to know that the medicine being used on me was significant in probability-- less than 0.05-- that if I am given this medicine, there's a 95% chance that I'll be cured.

  • 16:41

    SPEAKER 2: Discussion-- the discussion follows the results section of the paper.

  • 16:47

    KATHLEEN M. SCHIAFFINO: At the end of the manuscript you'll have a discussion section. And in the discussion section, what they're trying to do is to summarize, in a more global way, the meaning of the results that they found. They'll try to connect that information back to theories and findings from previous research-- typically, having been cited in the lit. review

  • 17:09

    KATHLEEN M. SCHIAFFINO [continued]: at the beginning of the study. They'll make an effort to note consistencies or inconsistencies between what they found and what other people have found. Authors will also use that opportunity to talk about the weaknesses of their study, and also to perhaps make some suggestions for where future research might go as a result of what they found.

  • 17:31

    SPEAKER 2: In assessing the quality of the study, it is important to keep in mind external validity.

  • 17:37

    HAROLD TAKOOSHIAN: External validity is how much that experiment really generalizes to the outside world-- whether what happens in the laboratory has meaning in the real world. [External Validity- The degree to which findings can be extrapolated to an outside audience]

  • 17:46

    SPEAKER 2: For example, if a study claims that a particular drug inhibits tumor growth in humans, but was only tested on rats, this conclusion may not be externally valid. It would be acceptable to say that because the drug halted tumor growth in rats, it may inhibit tumor growth in humans.

  • 18:07

    SPEAKER 2 [continued]: When evaluating the quality of information in a discussion, here are some important things to look for-- does this discussion explain the results, or just simply reiterate them? A good discussion explains the results. Does the author spend adequate time discussing the implications of all results,

  • 18:29

    SPEAKER 2 [continued]: or does he or she just focus on a certain aspect of the results? Are the implications from the study that the author is suggesting appropriate? Is the author drawing conclusions that are too general from what the study currently examined?

  • 18:49

    SPEAKER 2 [continued]: Acknowledgments-- another major section of a research paper is the acknowledgments. Here the authors thank those people who may have helped them, or provided financial or academic support throughout the course of the study.

  • 19:10

    SPEAKER 2 [continued]: The final part of a research paper is the literature cited section. It provides a comprehensive list of all the articles and materials that were referenced in the study. This allows readers to identify where to find certain articles on studies that relate to the current study. In the body of the paper, authors

  • 19:31

    SPEAKER 2 [continued]: use in-text citations to reference past studies, and then provide the complete reference at the end of the paper. For journal articles, which are the most common type of paper cited in a research paper, a complete reference usually includes the authors, the title of the article,

  • 19:52

    SPEAKER 2 [continued]: the journal title, the volume of the journal the article was in, the page numbers where it can be found, and the year it was published.

  • 20:04

    SPEAKER 3: Hey, it's me again!

  • 20:07

    SPEAKER 1: Hey, how are you?

  • 20:10

    SPEAKER 3: It's been good. Oh, no!

  • 20:19

    SPEAKER 1: No!

  • 20:22

    SPEAKER 3: Sorry! Sorry, I tried to catch it. Can you remember what you wrote?

  • 20:33

    SPEAKER 1: Maybe.

  • 20:36

    SPEAKER 3: If you remember what was down here--

  • 20:38

    SPEAKER 1: Do you want to write it down?

  • 20:40

    SPEAKER 3: Yeah!

  • 20:48

    SPEAKER 1: We'll write it down. So I'm just going to spit it out.

  • 20:56

    SPEAKER 3: Just say everything you can remember. First thing--

  • 21:02

    SPEAKER 1: So the video divided everything up into different parts of the research paper.

  • 21:05

    SPEAKER 3: Whoa, slow down! Slow and steady wins the race.

  • 21:11

    SPEAKER 1: First is the abstract, which should be concise and summarize the research. And next comes the introduction, which gives perspective on the study, and explains why what is being studied is important. Oh, and I have to have a hypothesis, or problem being addressed. Then after the introduction comes the methods section,

  • 21:33

    SPEAKER 1 [continued]: and this one is long. The methods section has a detailed description of the procedure, so that the experiment can be easily repeated.

  • 21:42

    SPEAKER 3: You're going to have to slow down just a little bit.

  • 21:46

    SPEAKER 1: I'll slow down. To evaluate the methods section, I have to look for several things-- how large the sample size is, if the measurements are valid and reliable, if the subjects were randomly selected and assigned, and if the experiment controlled for possible external influences.

  • 22:09

    SPEAKER 1 [continued]: Then, after the methods section comes the results section.

  • 22:13

    SPEAKER 3: Results-- good, good.

  • 22:16

    SPEAKER 1: So in the results section, the author presents the data that they found, and they usually perform statistical calculations with a P-value. And if the P-value is less than 0.05, then the results are significant. And in evaluating the results section, I have to pay attention that the researchers performed the appropriate statistical tests.

  • 22:38

    SPEAKER 1 [continued]: And then finally is the discussion section. And the discussion section should explain, not reiterate, the results. And here I have to pay attention that the authors have presented all the data fairly, not just the ones that support the hypothesis. And I have to make sure that the authors have not

  • 22:58

    SPEAKER 1 [continued]: drawn conclusions that are too general based on the results of the study. And then there is the acknowledgments, and that's it! And I remembered everything. Oh, my gosh!

  • 23:09

    SPEAKER 3: You are some kind of information machine! What's wrong with you? That was amazing! I probably got about 15% of that, but you seemed to know it.

  • 23:19

    SPEAKER 1: I should get started, I have six hours. You take a nap.

  • 23:23

    SPEAKER 3: Yes! Yes, I'll take a nap and support you in my dreams. You're some sort of genius, it's clear.

Video Info

Publisher: SAGE Publications, Inc

Publication Year: 2008

Video Type: Tutorial

Methods: Evaluation

Keywords: challenges, issues, and controversies; comparison; practices, strategies, and tools


Abstract

A student preparing to critique a research paper happens on a TV show that explains the different sections of research papers. The show draws on experts and examples to help the student understand what she should look for to evaluate each section of the paper.

Publication Info

Product: SAGE Research Methods Video

Publication Place: Thousand Oaks, USA

ISBN: 9781483396750

DOI: https://dx.doi.org/10.4135/9781483396750

Copyright Statement: (C) SAGE Publications, Inc., 2008

People

Interviewee: Juliana Daniil

Academic: Harold Takooshian

Academic: Kathleen Schiaffino

Narrator: Jerry Goralnick

Interviewee: Chloe Phillips
