
Measurement, Tips, and Errors: Making an Instrument Design in Risk Perception

Case
Published: 2014 | Product: SAGE Research Methods Cases Part 1

Abstract

I am currently conducting research for a PhD in Public Health. As a result of my previous experience researching my Master's degree thesis, and what I have learned so far about designing and validating a risk-perception instrument, I have gained some insight into the know-how and possible pitfalls of undertaking research. These range from choosing a theme and designing an instrument to obtaining results and publishing. Some of the methods I will discuss are the literature review, the Content Validity Index in the field of validity, and statistics in the field of reliability such as Cronbach's alpha and Pearson's product–moment correlation.

Learning Outcomes
  • Understand the definition of an instrument
  • Summarize some tips and errors when developing a new instrument
  • Understand the importance of the reliability and validity process
  • Choose between designing a new instrument and using a ready-made one
Error: Go with the Flow in Your Research Theme

I began in the research world with a Master's degree in Public Health. On the first day of classes, the teacher asked me to think about a topic for my thesis for the next class. I got chills. Really? I had 2 days to define my research topic for the next 2 years and probably the rest of my academic life?

I decided to combine Psychology with Public Health because the former was the topic of my Bachelor's degree; at least that was clear, but the real topic of the research was not. In order to be accepted into the Master's program, I wrote an essay about a public health problem. I chose adherence to treatment in HIV/AIDS because I was familiar with the topic from previous experience with an HIV charity. It was a very popular topic, so I went with the flow and chose it for my thesis too; at that moment, it seemed the logical thing to do.

The truth is, although it is a very important issue, I was not feeling very connected to it. You need some kind of passion; it may sound corny, but it is kind of like falling in love: you have to feel sparks.

Jackson H. Brown gave the following advice: ‘Marry the right person. This one decision will determine 90% of your happiness or misery’. I don't know how he arrived at this conclusion, although I can imagine. In research, it is the same: your topic and methodology are going to determine a lot of your happiness and commitment because you are going to spend a lot of time reading, writing, researching, and thinking about them.

Research something that inspires you, which you enjoy, and that will bring you fulfillment. This does not mean that you are not going to desperately try to escape from your research sometimes; there are times when you have to take some time out, even if you love it!

The first mistake could be to choose a research topic you are not fully convinced about. As you can see, I changed my topic and decided to describe how I developed an instrument on ‘risk perception of children's burns’ for parents. Fortunately, I did it in time and had no regrets. Avoid this mistake: don't decide in 2 days, but begin work really convinced about what you want to do.

Tip: Make a Literature Review of Your Theme

Although my theme was children's burns, I conducted a literature review of ‘risk perception in children's accidents’ because I discovered that there were very few articles dealing solely with burns, which required me to widen my scope. It was time-consuming, but in the end, it made a difference because I had the whole picture: the origins, trends, and vanguard discoveries. The essential point is to refine the topic and identify the gap in the literature which you want to fill with your research.

Review books in addition to journals. I strongly suggest going to the main sources. In my case, I had to read the works of Paul Slovic, Roger E. Kasperson, Baruch Fischhoff, Mary Douglas, Daniel Kahneman, and Ulrich Beck, since they are theoreticians and researchers of risk perception. This helped me to decide which theory I wanted to underpin my project and which points were the strongest and most fundamental.

‘Gray’ literature, which includes pictures, videos, interviews, newspapers, and letters, can be a very good source of ideas and can help you to remember all the authors; I found it very easy to get lost among all the names at the beginning. For example, I found videos of interviews with Paul Slovic on the Internet, which helped me remember the topic, the facts, who he is, and his role in the field.

First, check what sources you have at hand: books, databases, and newspapers. Then consider if you need to buy some if they are not available. Basically, you must be willing to invest some money and time. My recommendation is to enjoy it and make it worth the effort.

Tip: Make a Literature Review of Instruments

First, let's be clear about the concept of ‘instrument’. Paul Vogt defined it as any means used to measure or otherwise study subjects; it could be a mechanical device or a written instrument. Therefore, indices, scales, and questionnaires are all measurement instruments. It is likewise essential to review the literature of design and validation articles for three main purposes:

  • Find an instrument that might be what you are looking for.
  • Realize that if there are no instruments for your objectives, you need to design your own.
  • Acknowledge what the requirements are for publications to get an idea of what journals are expecting from the task that you are about to start.

Even when you are not going to develop an instrument, you need to report the reliability and validity of the one you are using in your research sample. Delbert C. Miller and Neil J. Salkind mention two reasons for designing a new instrument:

  • after a literature search, he or she might not find a scale that fits the problem, or
  • the available scales are poorly constructed.

Any constructed scale should be good enough to invite future researchers to use it in the ongoing process of accumulating research findings. Even if an instrument already exists, be careful; it is not necessarily a good one. That is why it is important to know the process of constructing an instrument: knowing the process is how you learn to judge one. When you do find an instrument which fits your needs, this has some implications too. Sometimes you cannot use it without permission and, of course, never without giving credit to the author responsible for it. For my research, I needed to develop an instrument because I could not find any risk-perception instruments relating to children's burns which I could use with Spanish-speaking mothers.

Error: Skip Methodology

Methodology is like a treasure map: you are about to sail into distant and unknown waters. In other words, the methodology of your research is your guide. Without a plan, you are going to start rambling, improvising, and eventually get lost. Do not skip researching the methodology. You need to know the requirements and procedures for implementing it depending on the type of study desired, for example, case–control or cross-sectional studies. Additionally, you should also research the methodology of how to design an instrument.

Having this set out clearly will help you to plan your objectives according to the time you have available. Usually, we have deadlines imposed upon us: the school calendar, a date from our tutor, a congress where we want to present our results; much also depends on your personal situation. The best thing to do is to plan and make realistic calculations for your research. Always remember that there are things that can delay your plans unexpectedly, and it is better to allow extra time for possible obstacles.

Tip: Start with the Right Compass and Know Your Objectives

If methodology is the map, objectives are the compass. If you decide to design and validate questions, Kenneth Rasinski asserts two indisputable conditions: be absolutely clear about the research objective and word the questions so their meanings are clear and unambiguous. There is a lot to research in one single theme, for example, accidents:

  • In what population do I want to research accidents? I could answer ‘children’.
  • Do I want to research children of both genders? I could answer ‘yes’.
  • From what age range do I want my sample of children? I could answer ‘I just want between 1 and 4 years’.
  • Do I want to research all kinds of accidents? I could answer ‘no, just burns’.
  • Do I want to research burns that happen in a particular location? I could answer ‘burns that happen at home’.
  • Do I want to research burns that happen at home when the children are alone? I could answer ‘yes’.
  • Do I want to study the frequency of the burns when the children are alone at home? I could answer ‘no, I want to research what the mothers perceive as risk’.

And keep going until you find your topic.

The permutations are infinite. Try to do this exercise with your theme. You are going to see that a lot of questions already have answers, but others do not, or at least not in your context. Once you have chosen the question, ask why it is important to research this topic, whether it is important to develop an instrument in this instance, and what you want to achieve. Make objectives based on the research question and methodology to achieve congruence in your research. It is imperative to do the first steps correctly because you have to delineate the topic in order to focus the instrument. For example:

  • Topic: Mothers' risk perception in children's burns
  • Objective: To explore the perceptions of mothers
  • Type of study: Cross-sectional study
Tip: Good Instruments Come in Fours

Have you heard the expression ‘bad news comes in threes’? In the same way, I think you can say ‘good instruments come in fours’: construct, validity, reliability, and design. The topic of the research is going to become the construct. Jennifer Peat defines constructs as underlying factors that cannot be measured directly, with the items being the individual questions in a questionnaire. The items ideally have two characteristics: reliability and validity. John Reinard defines reliability as ‘the internal consistency of a measure’ and validity as ‘the degree to which a measure actually assesses what is claimed’. There are many different types of reliability and validity, and it is necessary to decide which ones to implement for the instrument.

One of the most widely used tests of reliability is the test–retest model. Roger Sapsford and Victor Jupp argue that the reliability of a test should preferably be established by test–retest methods: although split-half reliability or Cronbach's alpha give an indication of reliability, they are not as strong as test–retest methods since they do not give an assurance that the trait, belief, or attitude is stable over time. Delbert C. Miller and Neil J. Salkind also note that many experts believe the test–retest index is the best measure of reliability. The problem is that if the construct is not stable, you cannot demonstrate reliability.

For example, in my first research on risk perception in children's burns, I applied a test–retest in a group of mothers with a month between applications. The statistic I used was Pearson's product–moment correlation, a measure that indicates the correlation or dependence between two variables. In the test–retest case, one variable is the data from the first application of the instrument and the other is the data from the second application. This statistic shows whether the risk perceptions from the two applications are related and what kind of relationship they have; the magnitude of the result ranges from 0, meaning no correlation at all, to 1, meaning maximum correlation, and the sign indicates whether the correlation is positive or negative. The desired value for test–retest is at least r ≥ 0.7 with a positive sign, meaning that there is a positive correlation between the two applications. If that is the case, one can conclude that there is stability between answers and that the sample did not change opinions much between the two applications.
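
To make the calculation concrete, here is a minimal sketch in Python (not my actual analysis; the scores are invented for illustration) of how a test–retest correlation can be computed with Pearson's product–moment coefficient, assuming each mother's total score from the two applications has already been obtained:

    # Test-retest check with Pearson's product-moment correlation.
    # The totals below are hypothetical; in practice they would come
    # from the first and second application of the instrument.
    from scipy.stats import pearsonr

    first_application = [18, 22, 25, 30, 27, 21, 19, 24, 28, 26]   # time 1 totals
    second_application = [17, 23, 24, 31, 25, 20, 21, 25, 27, 28]  # time 2 totals, a month later

    r, p_value = pearsonr(first_application, second_application)
    print(f"test-retest r = {r:.2f} (p = {p_value:.3f})")

    # A positive r of at least 0.7 would suggest that the answers were
    # stable between the two applications.

A value well below 0.7 would prompt the kind of search for explanations described below.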

For example, the Pearson product–moment results for one of my instruments were very low; only some questions reached the desired value of r ≥ 0.7. I searched for a possible explanation and discovered that some mothers said they have to constantly re-evaluate risk. Although risk perception is perhaps not a very stable construct, some authors have achieved the desired values for it. Test–retest does not have a golden rule that determines how much time between applications is appropriate; in a month, a lot of things can happen that can change the mothers' perception of risk, and maybe next time I can reduce the time between applications. It is not always a bad thing to have low values in test–retest. If we want to observe a change after an intervention, low test–retest values could suggest that the intervention itself or another factor caused a change.

A key point to remember is that you can have reliability without validity, but you cannot have validity without reliability. Be cautious. Simply reading the results of reliability does not mean validation, and sometimes, this mistake happens because we ignore how to judge an instrument. More is required than just the measurement of numbers. You must be sure that those numbers really reflect the reality of what you are trying to measure.

The last element is the design of the instrument, and for practical reasons, I have divided this into two parts: the presentation and the content. The presentation is the way the instrument is going to look, while the content is the way the questions are designed.

Error: Chaotic Variables

Regarding the content of an instrument, in order to have well-defined items, it is necessary to have clear variables and to know whether you are using nominal, numerical, or ordinal variables. This will govern both the way we are going to ask the questions and the statistical analyses which can be applied.

Suppose I ask a nominal question: do you think there is a probability your child could be burned? With a simple ‘yes’ or ‘no’ answer, I can calculate frequency and report percentage. However, in the case of my study, this would be wrong because I am ignoring an important issue of the construct. For example, Baruch Fischhoff defined risk not as a dichotomous (either/or) variable, but one that ranges from small to high.

Therefore, I decided instead to utilize a Likert scale, from 1 being ‘very much disagree’ to 5 being ‘very much agree’. The fact that the respondent chooses a number does not make the variable numerical; with Likert scales, variables are usually considered ordinal unless you deliberately decide to treat them as numerical. Thus, there are a lot of decisions to make here. Watch out for details so that your variables are congruent and not chaotic.
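
As a small illustration (the responses are invented), the type of variable governs how you summarize it: a nominal yes/no item is reported as counts and percentages, while a 1–5 Likert item treated as ordinal is summarized with its median rather than a mean:

    # Summaries that respect the variable type; responses are invented.
    from collections import Counter
    from statistics import median

    yes_no_item = ["yes", "no", "yes", "yes", "no", "yes"]   # nominal
    likert_item = [4, 5, 3, 4, 2, 5, 4, 3]                   # ordinal, 1-5

    for answer, n in Counter(yes_no_item).items():
        print(f"{answer}: {n} ({100 * n / len(yes_no_item):.0f}%)")

    print("Likert median:", median(likert_item))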

Error: Making a Bag of Cats

With regard to the construction of an instrument, you can use one dimension or more than one. A dimension can be considered to be a group of items or questions that measure the same construct, or the same part of a construct if you divide it into different parts. If you are using multiple dimensions, it is necessary to make clear which items belong to each dimension. To be sure of this, once you apply the instrument you have the opportunity to use Cronbach's alpha, which gives a clearer picture of the internal consistency of the instrument and thus supports the claim that the items in a dimension actually measure what they are intended to measure.
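
As a rough sketch (the ratings are invented, not my actual data), Cronbach's alpha for one dimension can be computed from the item variances and the variance of the total scores:

    # Cronbach's alpha for one dimension: rows are respondents, columns
    # are the items of that dimension. The ratings below are invented.
    import numpy as np

    items = np.array([
        [4, 5, 4, 3],
        [3, 4, 3, 3],
        [5, 5, 4, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
    ])

    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha = {alpha:.2f}")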

Now comes the moment of truth: you are going to define whether your instrument is one-dimensional or multidimensional. For example, a risk-perception instrument relating to children's burns with only one dimension might examine the probability of accidents. But maybe we are interested in designing an instrument with multiple dimensions, which examines, perhaps, the probability of accidents, along with their vulnerability and severity.

Beware: don't make a bag of cats! A few months ago, I had a conversation with Luis Diaz Muller, PhD, from the National Autonomous University of Mexico (UNAM) about a thesis he reviewed, and he told me ‘it was a bag of cats’. I asked, ‘What does that mean?’ He replied, ‘Imagine if you fill a bag with cats. It is going to be a mess! The thesis was a little bit of everything; it didn't make any sense’. This is very useful to keep in mind during the construction of an instrument: you must be selective in what you include. Do not include something just because it seems interesting for your topic; there has to be a reason for it, or you will end up with a ‘bag of cats’ instead of an instrument. Sharon E. Robinson and Mary E. Stafford define this as construct-irrelevant variance, that is, items not related to the construct(s) being measured. They also mention the error of construct underrepresentation, when a test fails to capture or assess all important aspects of a construct adequately.

Fewer items does not automatically mean all is well. You have to be sure that the instrument represents the construct you want to measure. Find a balance between asking sufficient questions and asking questions which are not actually relevant to what you want to measure. The first instrument I constructed suffered from considerable construct underrepresentation. I developed more ideas and items, and eventually reached a point where I had to stop or end up with construct-irrelevant variance. This does not mean that the ideas are bad, rather that they just do not belong in this instrument. Instead, save them for a different investigation.

Tip: How to Write Questions

This is a job that requires a lot of effort, creativity, and patience. Once the type of variables, dimensions, and questions you want are clear, it is time to develop items—lots of them, and lots of versions—until you have the final version. Viswanathan suggests that redundancy is a virtue during item generation, with the ultimate goal being to cover important aspects of the domain. Even trivial redundancy—that is, minor grammatical changes in the way an item is worded—is acceptable at this stage. Roger Sapsford points out that when you have generated perhaps three or four times as many items as will be needed for the eventual scale, you can go over them more critically and try to improve them, discarding those which on second examination do not seem appropriate or well formed.

Finally, there are also books that offer guidance. Arlene Fink published a very good one called ‘How to Ask Survey Questions’. My recommendation is that you also make a checklist, writing down every tip you have. Examples are to ask just one question per item, watch for poor word choice, and check that your questions are written so that your participants understand what you are asking. I made my checklist and went through three draft versions until I came up with one that really convinced me.

Tip: Design a Friendly Instrument

The design of the instrument is like a calling card. You are going to present yourself and your instrument, so the instrument has to be well designed, whether it is on paper or virtual. Make it neat so that the people whom you are asking to answer it do not think ‘this is going to be boring’ because all of the questions seem scrambled together.

List your questions in an orderly fashion, and if you have open questions, make sure there is enough space for people to write their responses. Otherwise, they may end up writing very small and you might not understand what they are trying to say. Also design your instrument keeping in mind both yourself and the people who are going to work on the project because when you have finished the surveys, you need to compile all the information. This could be a long and tiring process, and some errors could occur when you are transcribing the answers, thus modifying the results of the investigation.

In my case, I used an optical reader, and since this required answering the instrument with a special pencil, I gave the instrument along with the pencil to the respondents. This method offers an advantage when transcribing the data but imposes other requirements, such as learning how the machine and design program work. I could do this because most of my questions were closed; when you have open questions, you cannot read them with the optical reader. In addition, I had to consider the financial expenditure involved because all of the sheets had to be printed originals, not copies, so the optical reader could scan them. This raised the cost, but I decided I would rather spend money and save time.

Usually, the transcription of results is done in code. For example, when you ask about gender and the answer is ‘male’, the researcher just types ‘m’. Another advantage of using an optical reader is that you can design a code and the machine gives the results in that code. Whether you use technology or a more traditional tool, it is important to stay true to the particular code used. For example, when transcribing repeated surveys, the codes should not appear differently or change in any other manner.
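
A small sketch of what staying true to a code can look like in practice, assuming a simple codebook (the field names and codes here are illustrative, not the ones I actually used):

    # A codebook keeps transcription codes consistent; unknown answers
    # are rejected instead of being coded in an ad hoc way.
    CODEBOOK = {
        "gender": {"male": "m", "female": "f"},
        "burn_probability": {"very much disagree": 1, "disagree": 2,
                             "neutral": 3, "agree": 4, "very much agree": 5},
    }

    def encode(field, raw_answer):
        try:
            return CODEBOOK[field][raw_answer.strip().lower()]
        except KeyError:
            raise ValueError(f"Unknown answer {raw_answer!r} for field {field!r}")

    print(encode("gender", "Male"))             # -> 'm'
    print(encode("burn_probability", "agree"))  # -> 4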

Tip: Looking for Opinion

Before you go ahead with the application, you need to be sure the instrument is correct. It is therefore time to call for the opinion of experts and laypeople to assess the content validity of the instrument. It might sound redundant, but you need an instrument to measure your instrument. Doris McGartland suggests asking a minimum of six evaluators—three experts and three laypeople—explaining to them what the instrument is designed to measure, and calculating the Content Validity Index (CVI). Some authors have criticized this method, but in the end, you have to decide on a position and whether you have the time and resources to carry out the CVI.
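
As a minimal sketch (the ratings are invented), an item-level CVI is simply the proportion of evaluators who judge an item relevant; with six evaluators, each item's index is the number of ‘relevant’ votes divided by six:

    # Item-level Content Validity Index (I-CVI) from relevance ratings:
    # 1 = relevant, 0 = not relevant. Ratings are invented for illustration.
    ratings = {
        "item_1": [1, 1, 1, 1, 1, 0],
        "item_2": [1, 1, 0, 1, 0, 0],
        "item_3": [1, 1, 1, 1, 1, 1],
    }

    for item, votes in ratings.items():
        i_cvi = sum(votes) / len(votes)
        print(f"{item}: I-CVI = {i_cvi:.2f}")

Items with a low index are candidates for rewording or removal before the pilot test.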

Tip: Meet the Population

Sometimes in a research study, we enclose ourselves in the world of books, but I had to remind myself that my research was for the living people out there. I therefore did an initial practice exercise to evaluate the context, the place, and the kind of people I was going to interview.

I started visiting the Children's Burns Unit and observed how the patients and doctors behaved, the rules, and so on. As an example, one rule I observed was that in order to enter I needed to put on a robe and wash my hands. After a few days, I carried out a technique called ‘Free lists’, and also asked the people I met in the unit to give me some sociodemographic information. This was very revealing because it guided me in the way I needed to administer my instrument and the language I should use.

Error: Administering the Instrument in Just Any Way

Internet surveys are very easy to administer, but they were not a possibility for the population in my study. Most of the participants did not have a computer or Internet. The way that I needed to administer my survey, therefore, was by direct interview or self-administration, but in any case, I physically had to go and meet the respondents.

Imagine a self-administered instrument when your survey population does not know how to read and write. This happened to me, and I ended up helping people write their answers. Administering the instrument without thinking about your population is a mistake. My advice is to administer it in the way that will actually reach your people.

Tip: How Many People to Include in the Sample

For a research study, you need two samples: the pilot sample and the research sample. This can be a difficult issue because your research sample will depend on the type of study you are doing and the formula you use for calculations. It is better to have a friend who understands statistics with you!

Since I was developing a new instrument, for the pilot test, I used the recommendation of George A. Johanson and Gordon P. Brooks, who authored an excellent article discussing this issue. They suggest 30 representative participants from the population of interest. This also helped me to obtain sufficient data and to calculate the sample for my planned cross-sectional study, since no previous data existed for the population I wanted to research.
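
For the research sample itself, the formula depends on the type of study; as a hedged sketch (not necessarily the calculation I used), a common choice for estimating a proportion in a cross-sectional study is n = z²p(1 − p)/d², taking p = 0.5 when no previous data exist:

    # Sample size for estimating a proportion: 95% confidence (z = 1.96),
    # 5% margin of error, and the most conservative assumption p = 0.5.
    import math

    z = 1.96
    p = 0.5
    d = 0.05

    n = (z ** 2) * p * (1 - p) / (d ** 2)
    print(f"minimum sample size: {math.ceil(n)}")  # -> 385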

Error: Forgetting the Pilot Test

At this stage in the development of the instrument, you have invested a lot of time and effort, and it is easy to think that everything is correct. But until you do the pilot test, you are not going to be sure if you need to change something.

I had my instrument finished and was tempted to skip the pilot test and start the research immediately with my target population. However, in the end, I decided to apply the pilot test anyway. In just the first interview, I realized that a question was duplicated. In some cases, this is not too bad if you are doing a direct interview since you can just skip it. In my case, it was much more significant because my instrument was a self-administered questionnaire. Imagine if I had sent the instrument without knowing and people had answered the two questions but with different answers! Which should I then take as the valid one? Lesson learned: always do a pilot test.

Tip: Acknowledge That There Is Bias in Answers

We all have a little Jiminy Cricket inside of us, but sometimes we ignore him and do not tell the truth. Even genuine questions can lead to untruthful answers. For example: have you used violence to punish your child? Even if parents actually do this, they may say no because it is not socially acceptable. To prevent bias in answers, we have to be careful in the way we ask the questions.

Error: Bad Application and Bad Ethics

The time required to complete the instrument will depend on the number of questions you have and their design. Many people are willing to give you time that they will never get back, so you should address them with respect and also be empathetic to their situation; for example, sometimes I had to interview parents when they were not in the best emotional state because they had just received bad news about the treatment of their child's burns. In those situations, it was better to reschedule for later. To start, it is a good idea to read a book about how to do interviews, or to practice completing the instrument with someone and ask them whether they felt uncomfortable with the questions or the way you approached them.

Lisa M. Gilman notes that it is important to ensure no harm is done to any survey respondent and no survey respondent is unduly pressured or made to feel obliged to participate. It is also ethically important to make sure the participants know what they are getting involved in, and it is important that they give informed consent. Colin Robson gives a good example of informed consent and advice on achieving it.

Tip: Statistics and the Translation of Data into Conclusions

When the application is finished, there will be lots of data. Simply having numbers is useless; the interpretation is what makes the connection with the research and determines what numbers mean for your population, what facts you can find, and what conclusions you can derive. For example, when I asked mothers about the probability of their child having an accident at home, their answer corresponded to very low values, meaning that they estimated a low probability of this happening. This is called optimism bias, that is, accidents happen but not to me. It is time to link numbers with theory.

Error: Not Publishing

Designing and calculating reliability or validity in a pilot test is not enough. The main purpose of the instrument is to apply it to a real population. Bruce Thompson explains that ‘in some respects, measurement does remain something of a stepchild. In journals, reliability and validity estimates are often missing and in doctoral dissertations are negative in sign and large in magnitude’. With regard to reporting Cronbach's alpha, Thompson argues that researchers sometimes report that ‘the instrument is reliable’ when, strictly speaking, it is the scores that are reliable, not the instrument. The point here is that it is necessary to refer to results appropriately.

Knowledge grows every day, but this only happens if we share it. Even when the results are lower than expected, it is important to publish the process because others can learn from your bad results and know what to do right in the future. It is also important to learn how to report reliability and validity and to give the subject the importance it deserves.

Tip: How to Publish

A good start for publishing is to search for prospective journals and check their publication requirements. Almost all journals have a guide for authors, and some journals have guides for the design and validation of instruments.

At present, I am waiting to find out if one of my articles has been accepted. The particular journal to which I submitted my article does not have a validation guide, but I used one called Guidelines for Reporting Reliability and Agreement Studies (GRRAS), developed by Jan Kottner and colleagues.

Tip: Find a Consultant

Sometimes, it is hard to know how to do things by ourselves. We need some advice or another perspective from someone more experienced. If you are pursuing a degree, maybe you have a tutor and teachers whom you can ask, but even then it may be difficult to find someone who knows what you want to do. Some people charge for advice, some are glad to help you without payment, and some of them might be interested in your research and decide to participate as coauthor in exchange for help. Look for help if you feel weak in a particular area.

When I decided I wanted to do multidimensional scaling but my tutor did not know how to apply it, I researched the method, read a lot, and decided that I needed to talk to someone about my queries. It took me some time, but I eventually found someone glad to help and coauthor my work.

Tip: Keep a Journal of Research

Keep a research diary with a list of sources, reminders, goals, and even inspiring phrases. It does not matter whether it is in a notebook, on your iPad, or in a document on your computer. Personally, I prefer a notebook and always carry a small one around because you never know where ideas are going to come from. It was more practical for me to write things down and check them later. Regardless of whether it is an electronic document or on paper, always keep a copy. You never know what might happen, and a prepared researcher always backs up their information.

Error: Being Desperate about the Process of Developing an Instrument

I wish I could say that creating an instrument is a process of going smoothly from one step to the next, but this is not the case. The process of validating an instrument always involves retracing your steps and perhaps taking another way, going further, and then going back a little bit and then making turns. Do not despair; just enjoy the trip and the destination!

The trick is developing a little bit of tolerance to frustration. For example, Dr Patricia Lorelei Mendoza of the University of Guadalajara once told me that people who achieve a particular grade or who do a research project develop something more than the results of the research: emotional intelligence.

To the uninitiated, an instrument might look like just a few pages of questions, but there is a lot of work and quite a few headaches! However, it is very rewarding to appreciate the final result and achieve the goals of the investigation.

Exercises and Discussion Questions
  • Write a letter to yourself explaining why you chose your research topic.
  • Search for a literature review article on your topic.
  • Search for an instrument related to your topic. Does it have reliability and validity? Does it fit your objectives?
  • Define your objectives and methodology. Ask yourself whether you have a deadline, imposed or chosen by you, to finish your research.
  • Define your variables and dimensions. Does your instrument avoid construct-irrelevant variance and construct underrepresentation?
  • Make your own checklist to evaluate your instrument and others.
  • Seek experts on your topic and ask them if they are willing to evaluate your instrument.
  • Find a journal in which you would like to publish and read the guide for authors. Make a checklist of the requirements they request. Does your research fulfill them?
Further Reading
Fink, A. (2002). The survey kit (2nd ed.). Thousand Oaks, CA: SAGE.
Garson, D. (2012). Reliability. Asheboro, NC: Statistical Associates.
Garson, D. (2012). Validity. Asheboro, NC: Statistical Associates.
References
Brown, H. J. (1994). On marriage and family. Nashville, TN: Rutledge Hill Press.
Fink, A. (1995). How to ask survey questions. Thousand Oaks, CA: SAGE.
Fischhoff, B., & Kadvany, J. (2011). Risk. New York, NY: Oxford University Press. http://dx.doi.org/10.1093/actrade/9780199576203.001.0001
Gilman, L. M. (2008). Pilot test. In P. Lavrakas (Ed.), Encyclopedia of survey research methods (pp. 584–586). Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781412963947
Johanson, G. A., & Brooks, G. P. (2010). Initial scale development: Sample size for pilot studies. Educational and Psychological Measurement, 70, 394–400. doi: http://dx.doi.org/10.1177/0013164409355692
Kottner, J., Audigé, L., Brorson, S., Donner, A., Gajewski, B. J., Hróbjartsson, A., … Streiner, D. (2011). Guidelines for reporting reliability and agreement studies (GRRAS) were proposed. Journal of Clinical Epidemiology, 64, 96–106. doi: http://dx.doi.org/10.1016/j.jclinepi.2010.03.002
McGartland, D., Berg-Weger, M., Tebb, S. S., Lee, E. S., & Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27, 94–104. doi: http://dx.doi.org/10.1093/swr/27.2.94
Miller, D. C., & Salkind, N. J. (2002). How researchers create their own scales: An activity of last resort (6th ed.). Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781412984386
Peat, J. K., Mellis, C., Williams, K., & Xuan, W. (2002). Health science research. Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781849209250
Rasinski, K. (2008). Chapter 33: Designing reliable and valid questionnaires. In W. Donsbach & M. W. Traugott (Eds.), The SAGE handbook of public opinion research (pp. 361–374). Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781848607910
Reinard, J. (2006). Communication research statistics. London, England: SAGE. doi: http://dx.doi.org/10.4135/9781412983693
Robinson, S. E., & Stafford, M. E. (2006). Testing and measurement. Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781412986106
Robson, C. (2000). Ethical and political considerations. In C. Robson (Ed.), Small-scale evaluation (pp. 28–44). Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781849209885
Sapsford, R., & Jupp, V. (2006). Data collection and analysis. Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781849208802
Thompson, B. (2003). Score reliability. Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781412985789
Viswanathan, M. (2005). Measurement error and research design. Thousand Oaks, CA: SAGE. doi: http://dx.doi.org/10.4135/9781412984935
