The research question at the core of this study examined how people in Britain perceived the leaders of the three main political parties. To answer this question, we collected data from 14 focus groups held during and after the 2010 election campaign in the three nations of Britain. The Qualitative Election Study of Britain (http://www.wintersresearch.wordpress.com/qes-britain/) was the qualitative dataset generated from the transcripts of these focus groups. We used the grounded theory method to evaluate these data and examine how people viewed the leaders of the three main political parties. This method required reading and categorising the data multiple times to uncover different layers of patterns and codes. Each process of categorisation yielded categories and connections that were more substantive and analytical than the previous iteration. The grounded theory method was the appropriate method to examine popular perceptions of party leaders during the 2010 election campaign as it let us systematically uncover patterns and codes through a process of successive coding; we could analyse the data without preconceptions about what we would find. However, the method was not sufficient to provide deeper and more nuanced representations of why people held such views. Therefore, we used discourse analysis to complement the grounded theory method and get the most detail from the data.
After reading this case, students will
- Be aware of the grounded theory method as a method for analysing qualitative data
- Understand why the needs of the research question might require multiple methods of data analysis
- Be introduced to the stages of using and applying the method
- Be mindful of the factors to bear in mind when deciding how and when to use the method
Was the dislike for Gordon Brown, the Prime Minister of the United Kingdom and leader of the Labour Party at the time of the 2010 general election, rooted in partisan politics? Was Nick Clegg, the leader of the Liberal Democrats (also called the Lib-Dems), quite as popular as the newspapers made him out to be? And was David Cameron, the leader of the Conservative party (also known as the Tories), successful in changing his political image in the eyes of the voting public? These were some of the many questions about the perceptions of party leaders that we examined using data from 14 focus groups held during and after the 2010 election campaign in all the major regions of Britain. The Qualitative Election Study of Britain (QESB) (http://www.wintersresearch.wordpress.com/qes-britain/) was the qualitative dataset generated from the transcripts of these focus groups. We used the grounded theory method to evaluate these data and answer questions concerning how people viewed the leaders of the three main political parties in Britain. This method required reading and categorising the data multiple times to uncover different layers of patterns and codes. Each process of categorisation yielded categories and connections that were more substantive and analytical than the previous iteration. While undertaking these iterative readings, we had to re-evaluate our prior conclusions and sometimes discard patterns that had arisen during the initial process of coding. The grounded theory method was the appropriate method to answer questions about perceptions of party leaders during the 2010 election campaign as it let us systematically uncover patterns in the data through a process of successive coding; we could analyse the data without preconceptions about what we would find. However, the method was not sufficient to provide deeper and more nuanced representations of why people held such views.
Therefore, we used discourse analysis to complement the grounded theory method and get the most detail from the data.
Here, we lay out the process of using the grounded theory method. Like any other research method, adopting this method of analysis requires attention to the following:
- identifying a clear research question and the required data
- identifying the appropriate method of analysis
- applying the research method
- evaluating the method and the findings
Identifying a Clear Research Question and the Required Data
The research question we wanted to answer was as follows: what are potential voters' perceptions (positive, negative and neutral) of the three main party leaders in England (plus the national party leaders for Scotland and Wales)? Our research question arose from two facts: first, the 2010 British general election would be the first time that each of the leaders of the three major political parties would lead their parties through a national election campaign, and second, three televised Leaders' Debates, in which these three leaders would participate, would be broadcast for the first time during a general election. We were curious to find out how these party leaders were perceived by the public at large and whether the Leaders' Debates would have any effect in altering or reinforcing these perceptions. In order to study these events as they unfolded, it was necessary to collect the data during the election campaign. We used qualitative methods to do so as we needed data that were rich in expressive detail in order to analyse participants' responses in depth and provide a credible answer to the research question. Data generated through quantitative survey methods would not have been suitable because survey instruments predetermine which issues are measured, and close-ended responses constrain both the manner and the extent to which participants can express themselves. We chose focus groups from the various qualitative methods of data collection for substantive and logistical reasons. Focus groups would give participants the opportunity to voice opinions arising from their own individual experiences and reflections; this method would also expose participants to the views of other people taking part in the focus group and give us insight into how participants justify their views to themselves and to others. 
For logistical reasons, focus groups were a better option than interviews or ethnographic methods as they let us involve a larger number of participants over a wider regional distribution within the limited time frame of the unfolding election campaign.
Identifying the Appropriate Method of Data Collection and Analysis
The type of data collected or available to answer the research question determines, to a large extent, what method of analysis you can use in the study. We conducted 14 focus groups in England, Scotland and Wales: nine before and five after Election Day. These included three focus groups that deliberately coincided with the three Leaders' Debates during the campaign. Participants in the focus groups were chosen to be broadly representative in terms of age and gender. Participants were also screened based on their vote preference revealed through a preparatory questionnaire. (For more information on the design of the research that produced these data, read the companion piece to this article at: http://www.nova.edu/ssss/QR/QR18/carvalho88.pdf) The main sources of data for this analysis came from a focus group exercise that replicated a modified version of focus group research conducted by Rosie Campbell and Kristi Winters before the 2005 general election. In those focus groups, the participants wrote down words or phrases that came to mind in reaction to head shots of the party leaders (http://www.wintersresearch.wordpress.com/qesb-handouts/). After that exercise, the focus group moderator led the participants in discussions of their reactions. The researchers then used the grounded theory method to analyse the data. We broadly replicated this research design, but made some modifications that we hoped would improve the quality of the information we could extract from the data. Participants were asked to write down their first thoughts about the leaders on viewing their photos. The photos we used in our study were taken from the party websites. We decided to use the website photos because they represented the image of the leader the party wanted to project. For purposes of comparability, we searched for head shots with neutral backgrounds as we wanted the participants to respond to the person, not the context in which the leader appeared.
The participants were also asked to mark which responses were the most important opinions and to classify each response as reflecting a positive, negative or neutral trait about the leader in question. This was a modification of the original research design, which asked participants to note those words or phrases which were most important to them when considering each party leader. Immediately following this brainstorming, the moderator of the focus group guided the participants through a semi-structured discussion wherein participants revealed their responses and the reasoning behind their categorisation of the responses as positive, neutral or negative traits. This was done to ensure that participants did not simply highlight the negative characteristics of the leaders during the discussion as was found in the study by Kristi Winters and Rosie Campbell.
Participants were also asked for their views on topical issues that had arisen during the campaign, such as the call for proportional representation by the Liberal Democrats or the expectation that the election would lead to a hung parliament. Participants in the focus groups held in parallel with the three Leaders' Debates were asked to reveal their opinions on the performances of the three party leaders in each debate. Finally, participants were encouraged (but were not compelled) to reveal who they thought they would vote for and to explain the reasoning behind these choices since the role of party leader could be an important consideration in vote choice. The focus groups produced data in different formats. Audio recordings for all focus groups were transcribed, and the transcriptions were used for data analysis. The responses written during the brainstorming exercise on party leader attributes were transferred from the participants' sheets to an Excel sheet and categorised by participant, focus group and party leader. These responses were combined with the oral responses from the verbal section of the brainstorming exercise on party leader attributes and then transferred to NVivo for analysis. The narrative accounts of vote preference and vote choice were separated out from the transcripts using NVivo and analysed using a combination of NVivo and Excel. The data included words and phrases attributed to the party leaders in oral and written form during the brainstorming session, and verbal narrative accounts of vote preference and vote choice. We had the option of choosing data from these formats and applying appropriate qualitative methods to analyse them.
We did not form preconceived ideas about the categories and patterns that might be revealed in the data; however, we were aware of earlier work on the subject. Previous research has identified general categories of attributes for party political leaders. For example, Kristi Winters and Rosie Campbell found that participants' views of British party leaders in the 2005 general election could be organised according to the categories of likeability, competence and trustworthiness. Anthony King identified four qualities on which he speculated party leaders were assessed: their appearance, intelligence, personality and political style. Although grounded theory encourages data-led findings, it was still important that we were familiar with the existing research as it would facilitate comparison between our findings and previous research. Although replication is not the primary focus of qualitative research, in the case of the QESB, we set out to make a methodologically valid attempt to compare the Winters and Campbell findings to determine the stability of the leader assessment categories over two different elections and uncover the assumptions and context within which these assessments were made. We resisted the temptation to presuppose that prior classifications would be sufficient to structure the QESB participants' responses. Therefore, we chose the grounded theory method to help us analyse the QESB transcripts and let the patterns of responses and opinions emerge from the data. The method of data analysis we adopted facilitated the aim of our research: to uncover the ways in which party leaders were perceived by members of the general populace.
Applying the Research Method
What Is the Grounded Theory Method?
The grounded theory method was developed by Barney Glaser and Anselm Strauss in 1967 and has had many different interpretations and applications since. The method allows the user to ‘engage with the data’ in ‘multiple iterations’: to read and reorganise the data in a continuous process of analysis and categorisation. (The grounded theory method is a method for both data collection and analysis; however, we have used it largely for the latter purpose.) The theories that then arise can be said to be ‘grounded’ in the data. The Straussian version of grounded theory involves a three-stage coding process which allows a researcher to uncover multiple layers (or dimensions) of information and meaning at every stage of the research – from the research question to the process of data collection and then to data analysis and drawing conclusions (or inferences if possible). The first step is ‘open coding’, whereby a comprehensive and detailed ‘questioning [of] the data’ is undertaken in order to highlight the general concepts or categories that emerge out of the data. Each concept and category is treated as a ‘code’ – an empirical element with a specific value. These codes are then broken down further into analytical and empirical indicators that are subjected to a constant process of evaluation until, as Carol Grbich notes, ‘a process of saturation is achieved and no new information is emerging regarding the properties of the category’. The second stage is ‘axial coding’. After identifying a number of codes, Strauss and Corbin recommend identifying and classifying the links that exist between them. Identifying the relationships between the codes can make visible the associations between them. In our study, nodes that were coded during the open coding process (‘node’ is the term the software NVivo uses for a code) were recoded into normative categories, that is, categories that encompassed normative concepts common to several nodes.
The third step is ‘selective coding’ which involves confirming the existence of the relationships that have emerged in the previous two stages by examining the categories that have been created and the data that have been included and omitted as well as existing literature on the topic and the data from the memos that were created during the open coding process. Memos are notes written by the researcher(s) coding the data that explain the reasoning behind the assignation of specific codes to the text as well as the researcher's impressions and opinions. Memos are useful in helping the researcher uncover any hidden layers of their own bias that might affect their research and provide justifications for coding that can be reviewed and analysed.
How Did We Use the Method?
Following the prescribed sequence of the grounded theory method, we began with the process of open coding. We used the software NVivo to undertake this analysis. First, we reviewed the leaders brainstorming exercise sheets, typing in each participant's exact words and noting whether each was coded as positive, negative or neutral. Words left uncoded were included with the neutral category on the logic that if the participant did not code a word as positive or negative, it represented a neutral, or at least ambivalent, reaction. For example, in the Leaders' Debate focus groups, these were the words and phrases that participants used and marked as positive when looking at Nick Clegg's photo: Has good team members, Does not take support for granted, Honest, Idealist, Greener, New, Open-minded, Looks like he empathises, Trustworthy, Calm, Sincere, Confident, Trust, Thoughtful, Underdog, Common sense and Peaceful. We also identified any passages in the 14 transcripts where the leaders were discussed, extracted them and read through them carefully to ‘get a feel for’ the data.
Once the initial data collection and reading was completed, the brainstorming data were read a second time; at this stage, we organised the positive, negative and neutral/uncoded responses into thematic blocks (these could range from a single word to small phrases) and coded them, that is, gave these blocks of text specific names that would describe and represent them. We allowed words or blocks of text to be organised with multiple codes to allow each block to fully represent the range of possible descriptive and analytical concepts it contained. Again using the example of the positive Nick Clegg codes from the Leaders' Debates groups, the following thematic blocks were identified:
Like ordinary people: Does not take support for granted, Looks like he empathises and Common sense
Approachable: Open-minded, Looks like he empathises, Peaceful, Thoughtful, Calm, Sincere and Common sense
Calm: Peaceful and Calm
Good character: Honest, Trustworthy, Trust and Sincere
Leader: Has good team members, Confident, Calm, Peaceful and Common sense
Politics or ideology: Idealist, Greener, Underdog and New
Underdog: Underdog and New
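The multi-coding described above can be illustrated with a short sketch. The following Python snippet is purely illustrative (our actual coding was done in NVivo, not in code): it uses the theme names and responses from the Clegg example to show how a single response, such as ‘Common sense’, can sit under several thematic codes at once.

```python
# Illustrative sketch of open codes grouped into thematic blocks.
# A response may belong to more than one theme, mirroring how a block
# of text can carry several descriptive concepts at the same time.

themes = {
    "Like ordinary people": {"Does not take support for granted",
                             "Looks like he empathises", "Common sense"},
    "Approachable": {"Open-minded", "Looks like he empathises", "Peaceful",
                     "Thoughtful", "Calm", "Sincere", "Common sense"},
    "Calm": {"Peaceful", "Calm"},
    "Good character": {"Honest", "Trustworthy", "Trust", "Sincere"},
    "Leader": {"Has good team members", "Confident", "Calm", "Peaceful",
               "Common sense"},
    "Politics or ideology": {"Idealist", "Greener", "Underdog", "New"},
    "Underdog": {"Underdog", "New"},
}

def themes_for(response):
    """Return every thematic code that contains a given response."""
    return sorted(t for t, words in themes.items() if response in words)

print(themes_for("Common sense"))
# → ['Approachable', 'Leader', 'Like ordinary people']
```

A lookup like this makes visible which responses carry the widest range of concepts, which is what allowing multiple codes per block of text was meant to preserve.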
These codes were then subjected to axial coding, where they were classified in multiple ways. In the first instance, we grouped together codes that described the positive, negative and neutral characteristics for the three party leaders. After this initial classification, these groups of codes were further grouped together and allocated to categories within each characteristic type – positive, negative or neutral. Tables 1 to 3 below display this reallocation for the coded responses on Nick Clegg by the participants in the three Leaders' Debates focus groups. The numbers in parentheses give the word count for each coded category or the number of times a particular response was identified in the transcripts. Since the QESB participants had determined what a positive, negative or neutral trait would be, occasionally, very similar words or phrases were coded under multiple categories. For example, many QESB participants indicated that a characteristic of Nick Clegg was that they did not really know much about him (we coded this as ‘who?’). However, this quality of being unknown was rated differently by participants: some rated this as a negative trait for Clegg while others rated it as neutral.
Table 1. Nick Clegg concepts coded as positive.
Table 2. Clegg concepts coded as negatives.
Table 3. Clegg concepts coded as neutrals and uncoded.
At this stage, we felt confident that the categories attributed to each leader followed the classification of personality, competence and trustworthiness uncovered by Kristi Winters and Rosie Campbell in their study. The categories were therefore further assigned to each of these concepts. Reflecting upon this final level of organisation, we felt that while it made sense to organise them in such a way, it was too ‘smooth’. That is to say, the terms people used in the category of leadership for Gordon Brown had different elements to them than Cameron's or Clegg's leadership terms. While we were happy to apply the same three general categories Winters and Campbell generated through their grounded theory analysis, we also wanted to preserve and highlight the unique, variegated feel that characterised each leader's more general category.
Using Discourse Analysis to Supplement the Grounded Theory Method
While the grounded theory method was useful for highlighting the various patterns in the popular perceptions of party leaders, it was not sufficient for understanding those patterns within the wider context in which they occurred. The method was also restrictive in terms of examining the reasons behind these patterns and identifying the absence of certain concepts and categories from the data. To fill in these gaps and enrich the analysis, we used discourse analysis. Discourse analysis is a method applied to language in use. Margaret Wetherell wrote that the study of discourse is the study of human meaning-making. In particular, the work of James Paul Gee helped us in this area. To guide the second wave of our data analysis, we looked to his seven building tasks of language:
- how does language make certain things significant or not, and in what ways?
- what activity is a piece of language being used to enact?
- how is identity enacted through language?
- what relationships are enacted in a piece of language?
- what perspective on social goods does a piece of language construct (what is taken as right, normal or appropriate)?
- how does one piece of language connect to other things – making them relevant or irrelevant?
- how does a piece of language privilege or disprivilege certain ways of speaking (sign systems) or different beliefs and ways of knowing?
The first point, how language makes things significant, seemed very relevant to our research question. We were also interested in the way participants used language to construct an identity – in this case, not their own identity but rather the identities they constructed for the various leaders. We sought to pay attention to the words, phrases and intonations people used to express how they felt and thought about these leaders. We were particularly interested in the ways in which the terms people used were connected with each other. For instance, we were struck by the fact that so many respondents used the term ‘honest’ when discussing Nick Clegg. Not only did some respondents use the exact same word, but others used words that were similar in content such as ‘sincere’, ‘trust’ and ‘trustworthy’. Discourse analysis provided us with a method of grouping people's words and phrases into subcategories by considering the ways in which they were thematically similar. We were able to look at the data for the similar and different ways in which the identities of the leaders were constructed through connected words and phrases that represented normative concepts. We were also able to identify those words or codes which were most significant by examining how often they appeared in the brainstorming exercise. The logic of our significance criterion was as follows: the more often a word or association appeared independently across people's silent brainstorming word associations, the more prevalent or powerful that association was. Therefore, the terms and characteristics that came up most often provided an insight into the identity features for each leader that were predominant and were expressed more often by our participants than other features.
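The significance criterion described above amounts to a frequency count across participants' independent responses. The following is an illustrative Python sketch, not part of our actual workflow (the tallying was done in NVivo and Excel); the list of responses is a hypothetical sample built from words in the Clegg example.

```python
from collections import Counter

# Hypothetical flat list of transcribed brainstorming responses,
# one entry per participant mention (sample words from the Clegg example).
responses = [
    "Honest", "Sincere", "Trustworthy", "Honest", "Trust",
    "Calm", "Peaceful", "Honest", "Underdog", "New", "Calm",
]

# The more often a word appears independently across participants'
# silent brainstorming, the more prevalent that association is taken to be.
counts = Counter(responses)
for word, n in counts.most_common(3):
    print(word, n)
```

Run on real transcripts, the head of such a ranking would surface the dominant associations (here, ‘Honest’), which is exactly the signal we used to decide which identity features were predominant for each leader.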
By way of example, instead of eliminating Nick Clegg's category of ‘Calm’ and placing it within the more general category of Leadership, we maintained it as its own subcategory within the Leadership category (see Figure 1). In this way, we could link our participants' perception of Clegg as ‘calm’ to his general leadership qualities as well as highlight that the perception of him as a calm leader was a dominant category within the leadership attributes assigned to him by our participants.
Figure 1. Structure of the leader evaluations of Nick Clegg.
These concepts were further classified into the threefold categorisation of British party leaders uncovered by Kristi Winters and Rosie Campbell (discussed earlier): likeability or personality traits, competence or leadership and trustworthiness. To better understand and communicate how these categories connected with each other, a visual representation of these conceptual categories was devised. Each discourse-informed category was assigned a shape: rounded rectangles for leadership qualities, circles for personality and rectangles for trustworthiness. The number of times each concept was mentioned was used to determine the size of these shapes. For example, concepts with only three mentions are displayed in 8-point font while the most frequent concept, ‘Poor leader’, is displayed in 26-point font. Symbols indicate whether the category is positive (+), negative (−) or neutral/not coded (∗). Figure 1 displays this representation for the attributes assigned by the QESB participants to Nick Clegg.
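The mapping from mention counts to font sizes can be sketched as a linear interpolation. The endpoint sizes (8 point for concepts with three mentions, 26 point for the most frequent concept) come from the description above; the linear rule and the placeholder maximum count of 30 are our own assumptions for illustration, as the exact scaling is not spelled out.

```python
def font_size(mentions, min_mentions=3, max_mentions=30,
              min_pt=8, max_pt=26):
    """Linearly map a concept's mention count to a display font size.

    Assumed rule: 8 pt at `min_mentions`, 26 pt at `max_mentions`
    (the count of the most frequent concept; 30 is a placeholder,
    not a figure reported in the study). Counts outside the range
    are clamped before scaling.
    """
    mentions = max(min_mentions, min(mentions, max_mentions))
    frac = (mentions - min_mentions) / (max_mentions - min_mentions)
    return round(min_pt + frac * (max_pt - min_pt))

print(font_size(3))    # least frequent concepts → 8
print(font_size(30))   # most frequent concept → 26
```

Any monotone mapping would serve the same purpose; the point of the scaling is simply that a reader of Figure 1 can compare the prevalence of concepts at a glance.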
What Were Our Findings?
The above process was undertaken for each of the three party leaders, and the findings were then evaluated across categories and across leaders. We found that participants saw Gordon Brown as a man who was trying hard, but descriptions and discussions of him lacked any association with effective or successful governing or leadership. David Cameron received mixed reviews from the participants, but people used positive terms to assess his leadership ability, and perhaps most importantly, they did not think of him as a failure or as ineffectual. Nick Clegg was perceived as being honest and trustworthy – not one participant characterised him as deceptive or untrustworthy. In addition, no participant characterised Clegg as arrogant, smug or pompous; by contrast, Brown and Cameron were characterised as such. However, these positive traits were offset by the lack of information voters had about him, which resulted in few positive comments about his leadership abilities and a general sense of scepticism. Unlike Brown and Cameron, Clegg was described as an unknown figure right up until the third Leaders' Debate, by which time Cleggmania was in full bloom. Clegg was variously described as ‘bland’, an ‘unknown quality’ and ‘vague’. Some participants wrote the word ‘who’ in their responses to his photograph. His leadership qualities were affected by this perceived lack of information about him, with participants responding right through the campaign with phrases such as ‘amateur’, ‘weak’, ‘not a strong personality’ and ‘talks sense but not a credible leader’. Compared to the other two leaders, he was viewed by participants through his personality and perceived trustworthiness, with comments on his lack of experience and leadership.
Evaluating the Method and Findings
The method and the findings from this research were evaluated against criteria of validity and reliability. We examined whether the method was appropriate for use with the available data. The application of the method was also compared to that suggested by its original developers. As explained earlier, our application of the method met both criteria. The method was also evaluated for the scope and strength of the findings that could be generated. As with most qualitative research, the scope for generalisation of the findings is limited. However, using the method helped us gain insights into crucial questions and raised issues for further research. The findings were also evaluated against previous literature. As noted above, we found that the categories that emerged from the QESB data were similar to those proposed by Kristi Winters and Rosie Campbell in a similar study. This gives us additional confidence about the strength of our findings. Finally, we felt we made an important methodological contribution to data-led analysis of political party leaders. By identifying that the grounded theory method was helpful but insufficient to completely represent the meanings and values we saw in the data, we were able to find a method of analysis that could remedy these shortcomings. By applying discourse analysis, we could preserve and visually represent the unique, variegated associations that our participants created through their use of language. To learn more about the leader evaluations produced from this research, read our recent publication in The Qualitative Report (http://www.nova.edu/ssss/QR/QR18/carvalho89.pdf).
Exercises and Discussion Questions
- What kinds of research questions and available data would the grounded theory method be best suited to?
- What might be the differences in using focus groups and interviews in the collection of data for qualitative research? Do you think the data gathered would be of a vastly different character?
- In your view, which potential sources of methodological bias could affect the application of the grounded theory method? What might be done to address these sources of bias?
- What are the limitations of using the grounded theory method? How can these limitations be addressed methodologically?