According to Krotoski (2010: 2), the development of the Internet over the last few decades has resulted in researchers experiencing a ‘golden age of research online’. Online research methods have correspondingly proliferated in all fields of social science. Use of these methods mitigates the distance of space, enables research to be easily internationalised without the usual associated travel costs and can be valuable for researchers contacting groups or individuals who may otherwise be difficult to reach (see, for example, Barratt, 2012; McDermott and Roen, 2012). Over the last decade online research methods have become firmly established as a legitimate means of data collection for social scientists, removing some of the ‘considerable anxiety about just how far existing tried and tested research methods are appropriate for technologically mediated interactions’ (Hine, 2005: 1). Indeed, the use of an Internet-mediated methodology is moving from the realm of the novel and innovative into the mainstream and routine. This is particularly the case with online surveys and email interviews, which have flourished in many sub-disciplines of the social sciences. In contrast, online synchronous interviewing (somewhat surprisingly) remains a relatively uncommon approach to online data collection, although its use is on the rise through interfaces such as instant messaging (Hinchcliffe and Gavin, 2009) and video-based technologies such as Skype (Cater, 2011; Hanna, 2012; Deakin and Wakefield, 2014).
This chapter provides an overview of online interviewing. It begins by examining the use of asynchronous and synchronous online interviews. The chapter goes on to debate some of the advantages and limitations of online interviewing, particularly in relation to conventional face-to-face interviewing. Some useful sources to aid consideration of online ethics are then briefly discussed. A more practical technical section follows which advises on appropriate software for the conduct of online interviewing. Finally, we conclude by reflecting on the methodological progress and future of online interviewing.
The use of online interviews in social science research has become more widespread over the last decade (Hooley et al. 2011; James and Busher, 2009; Salmons, 2015). Wilkerson et al. (2014) offer useful guidelines for making decisions about the design and conduct of qualitative online research. These relate to questions about whether an online study is the most appropriate approach for a particular research project and whether to employ synchronous or asynchronous methods (see Table 24.1). To aid decisions surrounding which type of interview strategy to employ, the following section of the chapter examines the use of synchronous and asynchronous online interviews in social research (see Table 24.2). We begin by exploring the asynchronous interview.
Table 24.1 Decision-making checklist for type of online qualitative data collection

| Directions: answer the following questions to decide between online or offline study design. If you respond ‘Yes’ to most items, consider online data collection. | Yes – online | No – offline |
| Data collection considerations | | |
| Directions: answer the following questions to help you determine whether you should use synchronous (yes) or asynchronous (no) data collection, or both. If your responses are in both columns, consider whether it would be beneficial to use both data collection methods. | Yes – synchronous | No – asynchronous |

Source: Abridged from Wilkerson et al. (2014: 569–71).
Online interviews, conducted in non-real time or asynchronously, are now a fairly common data collection strategy used by social scientists. There are now numerous examples of research carried out using asynchronous interviews, most often facilitated via email (see, for example, Mann and Stewart, 2000; Illingworth 2001, 2006; Kivits 2004, 2005; James and Busher, 2006; James, 2007; Ison, 2009; Bjerke, 2010; Burns, 2010). Indeed, interviews conducted through the use of email have been one of the most widely used online methods to date.
There are a number of advantages to using an asynchronous online interview, not least the relative technological simplicity of email. However, it is important to remember that for some individuals, techno-competence may be inhibited by disabilities such as dyslexia or visual impairment (Clark, 2007) or other, more physical limitations which may make computer use difficult. However, Bowker and Tuffin (2004: 230) suggest quite the opposite, arguing that ‘the flexibility surrounding online data gathering may aid participation for those with disabilities. Indeed, irrespective of physical coordination, mobility and speech capacity, the textual nature of online interaction affords people with diverse operating techniques the capacity to participate'. Ison (2009) supports this stance, illustrating that email interviews are particularly suited for people with verbal communication impairments, such as cerebral palsy, because the flexible and asynchronous nature of the email interview can increase opportunities for participant involvement and enhance the quality and inclusiveness of research data.
A second distinct advantage of the email interview is that interviewees can answer the interview questions entirely at their own convenience. There are no time restrictions and this can be particularly valuable when participants are located in different time zones. Emails can be answered any time of day or night that suits the respondent. The lack of temporal restrictions also enables both the interviewer and interviewee to spend time considering their questions and answers, and perhaps composing, recomposing and editing responses to questions. James (2007) shows how this can enable the research process to become more reflexive, allowing both researcher and participant to reflect on the interview data and experience. That said, email interviews can also be used to construct an ‘almost instantaneous dialogue between researcher and subject … if desired’ (Selwyn and Robson, 1998: 2); responses can be immediate and a relatively fast-paced exchange of questions and responses can be achieved.
Nevertheless, James and Busher (2006: 417) suggest that an advantage of email interviews is that there is no need for the exchange to be fast-paced. They stress that much of the value of email interviews lies in the opportunity for respondents to think about their responses, ‘drafting and redrafting what they wanted to write’ (p. 406). Indeed, they conclude by suggesting that email interviews are particularly suitable when ‘snappy answers are not required'. Although email interviews do allow respondents considerable time to compose, edit and redraft responses to questions, this could be perceived as a disadvantage. A response that has been so well-considered and carefully thought about is likely to produce a ‘socially desirable’ answer rather than a more spontaneous response which can be generated through synchronous interviews or by more traditional face-to-face interviews (Joinson, 2005).
Some of the advantages of email interviewing can then also represent disadvantages. For example, although technologically an email interview may be simple to administer, it is also easy for a respondent to ignore or delete emails if s/he is too busy or loses interest in the process. The frequent time lag between an interviewer posting a question and the interviewee emailing a reply may result in a certain level of spontaneity being lost and this may impact on the richness of the data generated. Sanders (2005: 75–6) compared the data gathered via email interviews to that collected in face-to-face interviews using the same structure and questions and found that the email interviews did not generate the same quality of data. She argues that
the essence of the inquiry was often misunderstood or answers would diverge to other subjects. It was difficult to maintain the flow of dialogue … and because of the asynchronous nature of email contact, the lack of spontaneity meant that it was difficult to probe and threads were easily lost.
Sanders, 2005: 75–6
Additionally, the reliance on a text-based interview process can also lead to the researcher becoming ‘a victim of his or her own imagination and preconceptions’ (Bjerke, 2010: 1718) when interviewing people via email. In a situation where the participants and the researcher cannot see or hear each other and the researcher has to rely solely on the written text to understand the participant, there are concerns that valuable nonverbal data may be lost in the email interview process. Further issues revolve around the fact that researchers cannot safely ask ‘knowledge’ questions because respondents can simply check the answers on the Internet. Finally, there may still be age/generation differences in how comfortable respondents are with computer-mediated interaction via email. Despite the increasing prevalence of Internet access amongst the UK population, older people remain less likely to use the Internet and a ‘grey digital divide’ persists (Morris, 2007).
Despite these complexities involved in email interviews, there are clearly further advantages of this asynchronous online method. First, the time-consuming nature of transcription of interviews is reduced, if not eliminated altogether. As Burns (2010: 11.3) notes ‘comments, opinions, interpretations, even humour reflecting on various things’ were already transcribed in her email interviews. Second, Burns (2010) also suggests that because email interviews are interactive on an individual basis, in that the researcher responds to the interests and responses of an individual participant, the result is a more ‘personal touch’ to the interview process. Finally, on the practical front, because online email interviews remove the need to travel to an interview venue, the cost of email interviews is minimal.
In contrast to the growing body of literature that focuses on asynchronous interviews, there has been more limited academic assessment of the advantages and limitations of synchronous online interviews. Indeed, with the exception of an early flurry of research which used synchronous interviews (Gaiser, 1997; Smith, 1997; Chen and Hinton, 1999; Mann and Stewart, 2000; O'Connor and Madge, 2001), there have been relatively few recent empirical studies (Hinchcliffe and Gavin, 2009; Enochsson, 2011; Jowett et al., 2011), although there is a growing body of work examining Skype as a medium for synchronous interviewing, using both audio and video (Cater, 2011; Hanna, 2012; Deakin and Wakefield, 2014).
The reasons for the low take-up of synchronous interviewing are unclear. Certainly online synchronous interviews can be more complicated to set up than a basic email interview and this may, in part, explain the lower levels of usage of this type of interviewing. For example, a researcher planning to generate data in this way must begin by selecting an appropriate software package such as conferencing software (Madge et al., 2009; Sedgwick and Spiers, 2009) or access to a chatroom or instant messaging service (Hinchcliffe and Gavin, 2009; Barratt 2012; McDermott and Roen, 2012) to facilitate the interview. This can be perceived as requiring rather sophisticated technological skills compared to the use of email, which may act as a disincentive for using this approach. Moreover, as Deakin and Wakefield (2014: 605) note, some participants may not have the technological competence, familiarity with online communication, software requirements or regular high-speed Internet provision to enable them to participate in a synchronous online interview, which may act as a further disincentive.
However, this type of interview does also have distinct advantages and, in many respects more closely resembles a conventional face-to-face interview, thereby overcoming some of the limitations of an online asynchronous exchange. As Chen and Hinton (1999) have observed, ‘real time’ online interviews can provide greater spontaneity than online asynchronous interviews, enabling respondents to answer immediately and, in the case of synchronous focus groups, interact with one another.
Perhaps the most widely used approach to online synchronous interviews has been facilitation through conferencing software (O'Connor and Madge 2001; Madge et al., 2009; Jowett et al., 2011). Relevant software can be downloaded by the participants and the chatroom-type environment facilitates the synchronous nature of the interviews. Figure 24.1 illustrates a typical conferencing interface as seen by participants. The screen consists of a number of different windows and a toolbar. There is a large ‘chat’ window in which the dialogue is displayed; beneath this is a smaller window where users type their text and press return, and seconds later the contribution is displayed, prefixed with their name.
Source: O'Connor and Madge (2001).
Such interfaces are most familiar to those who regularly use ‘chatroom’ facilities. This may mean that such an approach to interviewing is most suited to individuals who regularly use chatrooms, for example teenagers (Enochsson, 2011), university students (Hinchcliffe and Gavin, 2009) or specific ‘marginalized’ communities (Barratt, 2012; McDermott and Roen, 2012).
An important advantage of the synchronous interview, already alluded to, is that the real time nature of the exchanges has much in common with the conventional onsite interview. Unlike asynchronous interviews, where there is time to edit and redraft responses, synchronous interviews can generate more spontaneous answers. This can result in responses being more ‘honest’ in nature as there is little time to consider the social desirability of the response in the ‘fast and furious environment’ (Mann and Stewart, 2000: 153) of the synchronous chat. A downside of this environment is that the fast-paced nature of the discussion generates interview transcripts which can be difficult to interpret. Contributions can be fragmented and rarely follow a sequential form because the interviewer may post a new question before the respondent has fully replied to the previous question. This results in a transcript that resembles a ‘written conversation’. On the positive side, however, as with email interviews, there is no need for the researcher to transcribe interviews as transcripts are automatically created. That said, Jowett et al. (2011: 358) found that it took at least twice as long to produce a comparable amount of transcript data in online interviews compared to those conducted face-to-face. Although online interviewing may therefore be more time efficient for the researcher (reducing travel times, for example), it may be more time consuming for the participant, and result in the production of less data.
Most synchronous interviews to date have been based on textual interaction. Recently, however, an increasing number of studies have employed Skype, a free synchronous online service that provides the opportunity for audio or video interviewing. On Skype, interviews can be conducted in real time via the instant messaging feature, which allows multiple users to participate simultaneously by typing their comments in a ‘common room’ (Moylan et al., 2015: 41). All conversations are saved in Skype and can then be searched for keywords or concepts. The conversation can also be exported in plain text format into programs such as Microsoft Word or Excel or a specialised qualitative data analysis program (Moylan et al., 2015: 41). Skype has greater national and international recognition than other online software applications that are available and the video calling facility provides the researcher with an opportunity not just to talk to their respondent but also to see them in real time (Deakin and Wakefield, 2014). Sullivan (2012) therefore suggests that Skype interviews can provide access to verbal and nonverbal cues, which are not available in text-based online interviewing, thus offering a level of authenticity comparable to face-to-face interviews, although Cater (2011) observes that the ‘head shot’ provided by the webcam may create obstacles in observing all of the participant's body language. This ‘head shot’ problem can be overcome by other video-teleconferencing applications, such as Access Grid, which is also advantageous owing to its lack of lag and freeze problems (Fielding, 2011). Hanna (2012) further observes that audio and video data can be easily downloaded onto the researcher's computer workstation, although technical hitches, such as webcams not functioning correctly, can also impede the interview process. Table 24.3 summarises some of the benefits and drawbacks of Skype interviews (based on Deakin and Wakefield, 2014).
Table 24.3 Benefits and drawbacks of Skype interviews

| | Benefits | Drawbacks |
| Recruitment | Allows interviewees and interviewer flexibility in terms of organising the interview time | Potential interviewees may be put off participating if they do not know how to use Skype |
| Logistical and technological considerations | | |
| Rapport | In the majority of cases, rapport can be established just as well as in face-to-face interviews. Exchanging emails, messages or reports can facilitate this process | When interviewing a reserved interviewee, building rapport can be difficult |
| Audio or video | Audio and video allow interviewees to choose the level of contact they wish to engage in | Video is not possible in some cases as it can reduce sound quality |
| Absentees | Time and money have not been spent if the interviewee does not log on to complete the interview | Participants appear to be more likely to ‘drop out’ of the interview at the last minute or without notice |

Source: Deakin and Wakefield (2014: 613).
There are also a number of key differences between synchronous and asynchronous interviews. These differences relate to the choice of software, the virtual interface and the temporal characteristics of each type of interview. However, many other challenges presented by the virtual venue are remarkably similar regardless of the type of online interview. In the following section, we go on to consider in detail some of these affordances and limitations of online interviewing, particularly in relation to face-to-face interviewing.
Researchers who have used online synchronous or asynchronous interviews report many differences between online interviews and face-to-face interviews. There are now several useful sources which can act as a guide to the practice of online interviewing (see, for example, Hooley et al. 2011; James and Busher, 2009; Salmons, 2015; Wilkerson et al., 2014). It is no longer simply the case that ‘face-to-face interaction … becomes the gold standard against which the performance of computer-mediated interaction is judged’ (Hine, 2005: 4): online interviewing is now increasingly valued in and of itself as a valid and legitimate research method. That said, many challenges still remain, and there is still a divergence of opinions over the suitability and validity of online interviewing. Jowett et al. (2011: 366), for example, still consider that there ‘is a lack of reflection and reflexivity’ surrounding online interviewing and that ‘there remains no clear consensus about the suitability of the Internet as medium for conducting qualitative interviews', while Deakin and Wakefield (2014: 604) argue that online interviews are still often presented as a ‘second choice’ to the ‘gold standard’ of face-to-face interviews. This notion implies that offline methods are ‘problem-free’ and without their own limitations and disadvantages. In the same way that the discussion of the differences between quantitative and qualitative research methods often ‘ends up being addressed in terms of what quantitative research is not’ (Bryman, 2004: 267), online methods are often debated with a focus on what they lack. This rather ignores the pitfalls that can be associated with offline interviewing as much as online interviewing and the different possibilities offered by each approach.
We shall now consider some of the challenges that remain, including online recruitment, representativeness, interview conduct and design, respondent identity verification, building rapport and online interaction.
A key concern for conducting both onsite and online interviews is the recruitment of an appropriate group of respondents. The Internet provides access to groups of users with tightly defined and narrow interests, for example, new parents (O'Connor and Madge, 2001), breast cancer patients (Sharf, 1997; Orgad, 2005), users of health-related websites (Kivits, 2004, 2005), university students (Hinchcliffe and Gavin, 2009) or ‘hard to reach’ populations (Mann and Stewart, 2000).
However, although participants with narrowly defined interests are potentially easy to locate online, the process of recruitment can be complex. One approach to gaining access to users of specific websites is through contact with website page owners or moderators directly. For O'Connor and Madge (2001), whose interest was in new parents’ use of a particular parenting website, contacting the website providers directly was a logical first step in accessing respondents. Similarly, both Murray and Sixsmith (1998) and Kivits (2004) accessed respondents by contacting the ‘moderator’ of the boards and arranging access and permission to use the site for contacting participants. Such an approach can also result in valuable publicity and support for the research. Increasingly social networking sites such as Twitter and Facebook have proved to be fertile grounds for recruiting respondents with shared or narrowly defined interests (Moore et al., 2015).
Researchers report varying levels of success with different approaches to recruitment. One approach is the posting of a general message to a bulletin board, introducing the research and advertising for volunteers to participate. Care must be taken, however, when posting to discussion groups to request participation. Hewson et al. (2003: 116) suggest that netiquette demands that postings to a newsgroup or discussion forum should be relevant, but this poses a problem because most researchers’ invitations to join a research project will not be directly relevant to the intended discussion. This raises ethical issues for the online researcher. The best practice is to approach the moderator of the list, newsgroup or discussion forum directly to get permission for the invitation posting but to be sensitive to the fact that such an invitation may be considered spamming and therefore unacceptable (Madge, 2012).
Selecting research respondents from the online world also raises issues of representativeness, common to all social science research. However, the Internet poses representativeness issues specific to this type of research, not least access to the Internet itself. As Mann and Stewart (2000: 31) suggest, ‘access to the Internet is a matter not only of economics, but also of one's place in the world in terms of gender, culture, ethnicity and language’.
The digital divide can therefore still be a very real barrier and some individuals and geographical areas are less Internet-connected than others. This raises a serious shortcoming of Internet-based research, often promoted as offering research potential unrestricted by geographical boundaries. Online research methods remain
very geographically specific, limiting who we can ‘speak’ to and whose lives we can engage with. The potential to be involved in a study using online research methods is, therefore, partial, so any grand claims of the utility of such methods for internationalizing research must be treated with some caution.
Madge, 2006: n.p.
Orgad's (2005) work is a good example of online research that she acknowledges suffers from the biases outlined earlier. Her research, which focused on users of breast cancer related online spaces, was biased in a number of ways. First, participants were recruited through specialist websites which were located by searching for only ‘top-level global domain websites’ (defined as those with addresses ending with .com, .org and .net). As a consequence of this rather restricted search process, the research suffered a North American bias as all other ‘national domain websites’ were excluded from the study. She also restricted her research to English language websites.
Other issues can impact on the representativeness of online research. For example, there is no central register of Internet users and although some websites may have membership lists, these do not include ‘lurkers’ or individuals who have chosen not to register. Likewise, a sample group drawn in the ways outlined earlier will inevitably exclude from the sample those individuals who choose not to answer calls for respondents.
Finally, Salmons (2015: 127) makes an important distinction between interviewing online and sampling and recruiting online, which depends to a great extent on whether the research is concerned with ‘online or technology-mediated behaviours, culture, practices, attitudes or experiences’ or whether the Internet is being used simply as a means of recruiting participants for research into offline lives. She, like Comley (1996) and Coomber (1997) almost two decades earlier, suggests that the Internet is particularly suitable as a methodological tool when researching specific groups of Internet users. Gaiser (1997: 136) is in agreement, stating that ‘… if the research question involves an online social phenomenon, a potential strength of the method is to be researching in the location of interest’.
Much of the existing research based on data generated through online interviews has to date focused on adapting offline practices, such as techniques for building rapport (O'Connor and Madge, 2001). Researchers have stressed the importance of replicating, as closely as possible, the face-to-face method, with James and Busher (2006: 405) seeking a methodological approach that ‘replicated as closely as possible … the normal processes of qualitative, face-to-face interviewing'.
Conventional interview etiquette, as well as procedural research ethics protocol, suggests that in a face-to-face interview, the interviewer begins by providing a brief introduction to the research project, an explanation of the interview procedure and perhaps a general overview of the questions included in the interview. In most cases, the interviewer would have had prior contact with the interviewee, making initial contact and arranging a suitable venue and interview time. During these interactions, the research project would have been introduced and its aims outlined. The virtual interviewer often lacks these early interactions, so opportunities for building rapport, gleaning profile data and ensuring that the participant feels at ease may be missed. It is important, therefore, for the virtual interviewer to develop strategies that compensate for the lack of face-to-face meetings. These strategies are discussed in more detail next.
Before commencing the interview, there is a need to decide how to inform participants about the interview procedure, for example a brief introduction to the aims of the interview, the estimated length of the interview and the types of question. It is also particularly important that a mutually convenient time to conduct the online interview is arranged, given that interviewees may be in different time zones or have variously timed work commitments (Jowett et al. 2011). It may also be necessary to remind participants how to contribute to an online discussion. For example, James and Busher (2006: 408) sent participants detailed ‘rubrics’ explaining the format of their email interviews and outlining data protection and privacy issues; O'Connor and Madge (2001) also provided participants with general information and an explanation of the process at the outset of their interviews (see Box 24.1).
This introduction was followed with another prepared piece of text that introduced the researchers by describing their gender, age, ethnicity and family and employment status. This was done with two specific aims in mind – in the absence of visual cues O'Connor and Madge (2001) wanted to create a text-based picture of themselves, first to facilitate rapport and second to elicit profile data from the respondents, which would have been visually apparent in a face-to-face interview. This method of establishing respondent identity and building rapport is discussed in more detail next.
In the virtual setting, the interviewer cannot make any assessment of the socio-demographic information which may have an impact on the interview. Indeed, Ward (1999) found that as a consequence of this, interviewees asked her questions about her own socio-demographic profile, which changed the power relations of the interview and gave her less control as an interviewer. It is perhaps necessary, therefore, to find other ways of obtaining socio-demographic information and to adapt conventional techniques accordingly. O'Connor and Madge (2001) made use of carefully designed personal introductions to allow for the loss of face-to-face interaction and in the hope that participants would follow their ‘model’ and provide similar profile information, such as age, number and age of children and ethnicity. This approach proved successful and respondents mirrored the contributions of the researchers, providing detailed profile data, which also gave respondents information about the other members of the focus group.
Although such methods can be successful, Thurlow et al. (2004: 53) suggest that this mechanism is unnecessary in the virtual world. They argue that questions which would be unacceptably direct in a face-to-face encounter are widely used and accepted in the online environment. For example, abbreviations such as A/S/L or A/S/L/P are often used to request information on the age, sex, location and a picture of those online. Of course, another advantage of the online interview is that there is no need for any participant to divulge personal information and encounters can be anonymous. This can help to minimize interviewer bias and can help when discussing sensitive topics with respondents who do not want to be identifiable in any way. The corollary of this is that participants may not always be what they seem because it is possible in an online environment to hide or invent personas. Hewson et al. (2003) argue that researchers cannot ever be certain of respondent identity in an online situation because there is always the possibility of users inventing an online personality or at least not being entirely truthful in describing themselves. The issue of verifying respondent identity in an online setting is discussed in more depth later in the chapter.
The anonymous nature of online research and its lack of visuality may present researchers with new challenges. Visual cues are absent from non-video mediated online interactions and this renders traditional interview techniques such as nods, smiles and silences redundant, although Skovholt et al. (2014) do suggest that emoticons may act as ‘contextualization cues', providing information about how specific online communication is supposed to be interpreted. Other issues arise, such as online silence, which can represent a number of scenarios – it could be that the respondent has withdrawn from the research or it could be that he/she has been interrupted by someone/something else or it could be due to a hardware or software problem. As O'Connor and Madge (2001: 10.11) found, a silence may occur because the respondent is ‘thinking, typing or had declined to answer the question'. The interviewer can interpret silences in any of these ways. It is important, therefore, that the researcher puts strategies in place to cope with such silences. James and Busher (2006) sent chatty reminder emails to non-responders during their email interviews. O'Connor and Madge (2001) dealt with silences by very direct questioning as to the whereabouts of the respondent – in a manner which may have been construed as impolite in face-to-face encounters. In deciding how to handle ‘silences', it is imperative that the online researcher acts in an ethical manner, allowing respondents to use silence as a way of withdrawing from the research. Ethical issues relating to withdrawal are discussed in more depth later.
Although a lack of visual indicators means that it can be difficult to make use of conventional interviewing tools, this is more than compensated for by other advantages of the virtual arena. A key advantage of the anonymous nature of online interaction is that there are no nonverbal cues to misread, which can also potentially place respondents on a more level playing field. Moreover, respondents, secure in the knowledge that they are anonymous, have been found to answer with far more candour than those taking part in face-to-face interviews. As Hinchcliffe and Gavin (2009: 331) noted, respondents in their study ‘valued perceived anonymity over embodied experience … (and) reflected that they felt they could be more honest as they were not in the presence of another person'. Similarly, Enochsson (2011: 20) found that the young people in her study particularly valued anonymity, enabling them to ‘write about difficult matters because there is time to think'. This was particularly the case for girls, who wrote longer answers in the online interviews compared to those conducted face-to-face. As such, online researchers report that the virtual interview is frequently characterised by the candid nature of responses.
Building rapport online, without the usual visual cues used in a face-to-face interview, can be a challenge for the online interviewer. Research conducted face-to-face relies quite heavily on visual cues and such cues can be helpful in building rapport. In the disembodied online interview, both the interviewer and interviewee are relying on the written word as a means of building rapport. The interviewer cannot use body language (facial expression, body posture) or vocal qualities (tone, speed, volume) to interpret what the interviewee is saying (Jowett et al., 2011: 360). Orgad (2005: 55) has therefore argued that ‘there is a real challenge in building rapport online. Trust, a fragile commodity … seems ever more fragile in a disembodied, anonymous and textual setting'.
One technique which online interviewers have used is sharing personal information [Page 428]as a means of creating virtual rapport. Both Kivits (2005) and O'Connor and Madge (2001) shared such information to replicate, online, the kind of rapport they believed would have occurred ‘naturally’ in a face-to-face meeting. O'Connor and Madge (2001) were influenced by feminist approaches to research, which stress the importance of equal power relationships within interviewer/interviewee exchanges and self-disclosure on the part of the interviewer. Within such approaches it is suggested that shared characteristics between interviewer and respondent will often result in a good level of rapport, with minimum effort. By developing detailed textual exchanges rich with self-disclosure and by posting visual aids, they aimed to create virtually what would exist in a face-to-face environment. They stressed aspects of similarity between themselves and their respondents such as gender, age, ethnicity, limited parenting experience and the challenge of arranging life around young children and newborn babies to create an interview environment which was ‘anonymous, safe and non-threatening’ (2001: 11.2).
However, it may be that going to such lengths to replicate traditional interview methods in an online setting is a misplaced technique. As suggested earlier in this chapter, the use of online interviews thus far represents little more than a change of ‘place'. Aside from interviewing in a virtual rather than a ‘real’ space, online researchers have done little more than transfer conventional, and in some cases outdated, approaches to a new arena. However, progress made in the offline world has not necessarily been reflected in online research practice. For example, offline researchers have begun to question the value of self-disclosure as a means of stressing similarities in the interview process. Abell et al. (2006: 241) suggest that the success of the self-disclosure strategy depends ‘upon acts of “doing similarity” being received as such by the respondents'. They stress that there is a real risk that respondents will not perceive self-disclosure in the way it is intended and, rather than encouraging rapport, this technique may serve to inhibit the respondent. They go on to argue that ‘often through a sharing of experiences, the interviewer paradoxically exemplifies differences between themselves and the interviewee'. In an online environment where ‘a stranger wanting to do academic research is seen as an unwelcome, arbitrary intrusion’ (Paccagnella, 1997: 3) and where there may therefore already be a risk of the researcher being perceived as an ‘outsider', it is important that researchers are aware of current debates, not just online but also offline.
Throughout this chapter, we have touched upon the ethical challenges presented by online interviewing. These issues are covered in much greater depth by Ess (2009), Krotoski (2010), Whiteman (2012) and Eynon et al. (this volume). Whilst many of the ethical dilemmas that arise when conducting online interviews may mirror those faced when carrying out face-to-face interviews (Krotoski, 2010: 4), an online researcher will undoubtedly also be faced with ethical challenges specifically pertaining to the online environment. This has resulted in a series of guidelines being produced to help researchers weave their way through the process of online research in an ethical manner. These include a general set of guidelines produced by the Association of Internet Researchers (AoIR) Ethics Working Committee (Markham and Buchanan, 2012) and more subject specific guiding principles, for example in geography (Madge, 2012), education (Convery and Cox, 2012) and psychology (British Psychological Society, 2013). According to Convery and Cox (2012: 50), it is unrealistic to expect that any single set of guidelines can cover all ethical situations of online research for there ‘is simply too much diversity across Internet cultures, values and [Page 429]modes of operation'. Rather they argue for a form of ‘negotiated ethics', a situated approach grounded in the specifics of the online community, the methodology and the research question(s). This does not mean an ‘anything goes’ relativist approach, rather an open, pluralistic policy in relation to online ethical issues (Ess, 2009).
Having noted some useful sources to aid consideration of online ethics, the following section of this chapter moves on to introduce more practical advice. A technical guide is provided that includes information on selecting software for online interviewing.
A wide variety of software and services are available to facilitate online communication and, depending on the context of the research, it is possible for researchers to make use of any of these to carry out online interviews. However, as Salmons (2015: 74) warns: ‘The moment you write about (or worse, buy) any kind of software or hardware, a new option is bound to appear that is smaller, lighter and faster'. This warning is worth heeding when planning any kind of online interview. In the following section of this chapter an overview of some of the more common types of software and services available for asynchronous and synchronous online interviews with individuals and groups is provided.
Software for asynchronous interviews can be divided into two types: email applications and discussion board software and services. Email is particularly appropriate for individual interviews, although the ‘copy-to’ function of most email applications may allow their use for small group interviews. The main advantages of using email are that it is more likely to be familiar and available to researchers and participants, it does not present problems with the compatibility of different software and systems, and it allows responses to be made privately. Discussion board software and services are more likely to be of use for asynchronous group interviews because they allow multiple participants to view and respond to postings from the researcher or other participants when convenient. Like email, discussion boards are unlikely to present compatibility problems and any participant with an Internet-enabled computer is likely to be able to access and contribute to a board.
Researchers planning to carry out interviews via discussion boards may wish to target an existing discussion board on a website or to create and moderate their own board for invited participants. Although there are particular ethical issues that must be considered where an existing board is used, there is likely to be less technical difficulty for the researcher, who simply requires access to a computer with an Internet connection. Creating a discussion board for the interviews, however, involves either the use of a software and hosting service or the installation of software on a server to which the researcher has access. Where a software and hosting service is used, the process is relatively straightforward from a technical perspective. The discussion board can usually be designed and managed through a simple interface on the website of the hosting service, and the location of the board can be distributed to participants by sending the URL or adding a link to the board to any webpage. Options such as requiring a password for access and selecting threaded or flat boards are frequently offered, and it is often possible to sample the service through fully functional demonstrations for trial periods. Pricing for these services can vary, and most services charge monthly fees. A number of free services are available, although these frequently include advertising. In all cases, it is necessary to check that the privacy and data security [Page 430]offered are adequate for the research. In cases where the researcher has access to a server, it is possible to obtain and install discussion board software for use in the research. Again, prices vary and there are a number of free open-source examples as well as commercial packages. A listing of both software-only options and software and hosting providers is available from the following website: http://thinkofit.com/webconf/.
A wide range of software and services are available for synchronous interviews, including online chat facilities, ‘instant messaging’ and video-based technologies. Many of these services offer facilities for both individual and group interviews and allow for communication via text or via audio and video.
The rise of social media and social network sites (SNS) such as Facebook and Twitter has meant that chatrooms, a previously rich resource for online researchers, have become less important as recruitment sites. Instead, networks of users with shared interests are relatively easy to locate via SNS (Moore et al., 2015). Once participants have been successfully recruited, it is relatively straightforward to access a range of free ‘instant messaging’ services (WhatsApp, Facebook Messenger and Gmail Chat), which provide a more secure and appropriate platform for synchronous interviews.
The key advantage of these services over the free online chat providers is that instant messaging software can be used to set up chats specifically for interviews that can be limited to invited participants only and in which the researcher has a great deal of control over the discussion. One-to-one and group communication is possible with many of the services and automatic transcription is frequently available. A number of extra facilities such as file transfer and desktop sharing are often also available. All the services allow real-time text-based messaging and some also offer video conferencing and/or Internet telephony facilities. This makes audio and video communication possible where the researcher and participants have broadband Internet connections and the necessary equipment (webcams and/or microphones and speakers). The growth of these services along with the increase in the number, usage and availability of Internet telephony services such as Skype, which allows one-to-one and multi-user audio communication over the Internet, is making their use for audio interviewing increasingly realistic. In most cases, however, users of one type of instant messaging or Internet telephony software cannot communicate with users of a different type, and the researcher will need to ensure that all participants have the same software installed. It is also likely to be necessary to provide lists of minimum requirements for participants, such as a broadband Internet connection and any required peripherals.
There has also recently been a proliferation of commercial interviewing apps which can be downloaded onto smartphone and tablet technologies (for a review, see http://interviewingsoftware.com). Similarly, Moylan et al. (2015: 45) identify several useful websites that exist to help the online researcher keep abreast of emerging trends in technology. These include the ProfHacker blog on the Chronicle of Higher Education website (www.chronicle.com/blogs/profhacker), Bamboo DiRT (dirt.projectbamboo.org), Mobile and Cloud Qualitative Research Apps (www.nova.edu/ssss/QR/apps.html) and the American Historical Association ‘Digital Toolbox for Historians’ (pinterest.com/ahahistorians/adigital-tool-box-for-historians). Further technical details can also be found on the ‘Exploring online research methods’ website (http://www.restore.ac.uk/orm/interviewsinttechnical.htm). A final useful and relatively new application is DragonDictate, which can automatically generate transcripts from audio recordings.
To conclude, we first reflect on the methodological progress of online interviewing before considering its future. Regarding methodological progress, although the data collected through synchronous and asynchronous online interviewing can be valuable to the researcher, we still urge that the potential of online research should not be exaggerated. Indeed, Hine's (2004) caution of a decade ago is still relevant today: ‘Internet-based research is no different from other forms of research. Just as we craft interviews appropriate for particular settings, so too we must learn to craft appropriate forms of online interview'. That said, it is clear that the data collected through online interviewing can be as rich and valuable as that generated during face-to-face interviewing. Indeed, some argue that the quality of responses gained is much the same as that of responses produced by more traditional methods (Deakin and Wakefield, 2014: 606). For example, the occurrence of pauses, repetitions and recasts does not differ significantly between face-to-face and online interviews (Cabaroglu et al., 2010). It must, however, be remembered that many of the issues and problems of conventional research methods still apply because, as Kitchin (1998: 395) commented some time ago, ‘…the vast majority of social spaces on the Internet bear a remarkable resemblance to real world locales'.
Online interviews can therefore be a useful additional tool for social researchers, but we would not suggest that this approach is appropriate for all types of research and neither do we suggest that online methods will ever replace face-to-face approaches to research. As Wilkerson et al. (2014: 569–70) illustrate, there are a range of decisions to be made in evaluating the respective advantages and disadvantages of online interviewing compared to face-to-face interviewing in relation to the specific topic that is to be investigated. At present, it appears that synchronous and asynchronous online interviews occupy a growing mainstream position in the world of social research. Increasingly, researchers who use online interviews adapt face-to-face research practices while also developing online specific practices. That said, even amongst those researchers who have successfully used online interviews, there can still remain some lingering scepticism surrounding their use. This is apparent in the continued use of face-to-face research to supplement and ‘verify’ data collected through online interviews (Orgad, 2005; Sanders, 2005; James and Busher, 2006). This approach weakens the position of online interviews because it suggests that they cannot stand alone as a research method. It also invalidates one of the main advantages of online research, which is the ability for researchers to expand the spatial boundaries of their research agenda without the traditional high costs this entails. However, although online researchers are still sometimes hesitant about the role of online interviews, their use has simultaneously become more mainstream, and a critical and reflexive stance towards these online methods is to be encouraged.
What, then, is the future for online interviews? Ever more rapid developments in the field of computer-mediated communications technology offer new and different media to the social researcher. Some of the issues discussed in this chapter relate to the lack of visibility during online encounters. The increasing use of VOIP (voice over Internet protocol) technologies such as Skype means that online interviews are not solely restricted to text-based exchanges, but this does not mean that text-based online interviews have become an irrelevance; rather, the range of online interviewing formats has expanded, as have the associated issues with employing a visual online format. Similarly, new mobile technologies such as smartphones and tablets that are facilitated by increasingly available Wi-Fi Internet access enable the location of the interview to become ‘much more fluid and temporary’ (Deakin and Wakefield, [Page 432]2014: 609). The advent of a plethora of new applications available on the wireless Internet, and cloud-based computing, will have further implications for online interviewing (Van Doorn, 2013; Moylan et al., 2015). One significant issue is that the production of ever more sophisticated Internet technologies, and the rapidity of change in this sector, will present challenges to the online researcher, demanding that they become ever more contingent, flexible and innovative in adapting these technologies to produce high-quality, nuanced online research methodologies.