A critical analysis, within the interpretive paradigmatic stance, of
a paper by Sammons et al. (2006) investigating variations in
teachers’ work, lives and their effects on pupils
Annie Fisher
It
seems that for some time there has been a tension in the academic
world about ways in which to judge the quality of research,
particularly qualitative inquiry (see, for example, Furlong &
Oancea, 2005, 2007; Hammersley, 2007, 2008; Hillage et al., 1998;
Klein & Myers, 1999). Following sustained criticism in the late
1990s by, for example, Hillage et al. (1998) and the Tooley and
Darby Report (1998), both of which suggested a widespread concern
about the quality of educational research and its poor value for
money (Hargreaves, 1996), there has been a proliferation of
frameworks proposing criteria by which to evaluate that quality. This work
analyses Sammons et al.’s (2006) paper and attempts to ascertain to
what extent it may be counted as ‘good research’, particularly when
viewed through the stance on ‘quality’ taken by Klein and Myers
(1999) and Hammersley (2007, 2008). Further, it attempts to
demonstrate that the analysis offers some lessons on the
conduct of interpretive research.
Sammons
et al.’s (2006) complex paper reports on a study commissioned by the
DfES (1999) into teacher effectiveness. The VITAE (Variations in
Teachers’ Work, Lives and their Effects on Pupils) project was a
longitudinal investigation which drew on a purposive sample of 300
teachers from 100 schools across seven local authorities, which was
intended to represent both the national teacher profile and that of
schools. Issues relating to the selection of teachers will be
discussed later in this assignment; however, the authors, in a
further paper (Day et al., 2006: 102), state “the results indicated
that the final sample was indeed representative”. From the data
collected through the use of mixed methods, Sammons et al. defined
six ‘professional life phases’ of teachers and then, using categories
of perceived identity, motivation, commitment and effectiveness,
further categorised participants into sub-groups with defining
key characteristics, such as self-image. From detailed iterative
examination of the data, four scenarios were finally drawn up; this
allowed the researchers to situate teachers in relation to the
dominant influences upon their professional lives at that
time.
According
to the abstract, the authors purport to investigate variations in
teachers’ work and lives, and their effects on pupils. There
is, however, a contradiction between the abstract and the first page
of the article itself, which claims it will “describe and analyse
influences on teachers’ professional and personal lives” (p.
682). Whilst these two aims may not be mutually exclusive, they are
not the same. This investigation appears to link directly to the
Hillage Report of the previous year, responding to a concern
expressed by policy makers and practitioners regarding “job
satisfaction, morale and motivation among teachers and their impact
on pupil performance” (Hillage et al., 1998: 15); this is clearly closer in spirit to
the aim expressed in the abstract. The overarching purpose, drawn
from the original bid, appears to be an attempt to understand how
teachers become more effective over time. This assumption is
questionable, and the researchers themselves conclude by
acknowledging that this is not necessarily the case. What they do
suggest is that there are a number of complex factors which may
influence teachers in different phases of their work; these, in
turn, impact on performance.
A
range of methodological literature (for example, Denscombe, 2003;
Flick, 2006; Silverman, 2000, 2006; Wellington, 2000) suggests that
conducting interpretive research is inextricably bound up with
collecting multiple versions of ‘reality’ or ‘truth’. Understanding
methodological approaches is, however, not straightforward. Pring
(2000: 43) refers to the perceived difference between qualitative
and quantitative research as “false dualism”, cautioning
that making distinctions between ontological debates on the nature
of reality, and the epistemological notion of different sorts of
truth, is also problematic. Wildemuth (1993), however, suggests the
argument regarding the relative merits of contrasting paradigms is
sometimes clouded by a focus on methods, rather than on the
underlying ontology and epistemology, whilst Crotty (2003)
believes the distinction to be concerned with
methods, rather than epistemology. Sutton (1993) offers a
deceptively simple answer, suggesting it is the relationship between
the researcher and the researched which is defining: the objective
researcher focuses on the respondent in order to understand
objective reality. In qualitative research, which views reality as
subjective and socially constructed, the subjective researcher
contextualises the question in order to understand it.
Given
an increasing blurring of paradigmatic features (Denzin and Lincoln,
2003; Schwandt, 2000), the multidimensional phenomena investigated
in this study make the work difficult to situate within purely
positivist or interpretive enquiry. As Hammersley (2007: 293) posits, there is a
“complex landscape of variable practice” in interpretive research.
Perhaps Denzin and Lincoln (2005: 4), in their oft-cited
introduction to qualitative research, come closest to explaining the
approach taken as ‘bricolage’: the piecing together of “a set of
representations that is fitted to the specifics of the situation”;
although Tashakkori and Teddlie (2003: x), in the preface to the
Handbook of Mixed Methods, claim that mixed methods research is now a
“separate methodological orientation, with its own worldview”, and
that it sits within a pragmatist paradigm.
The
notion of pragmatism in the context of this study is interesting,
not least because Robson (2002) locates such an approach centrally
in the ‘what works’ agenda (p. 43). Truth, he suggests, is seen as
‘what works’; the question being how feasible it is to conduct a
study using both qualitative and quantitative methods concurrently.
Reichardt and Rallis (1994: 85) suggest that the fundamental values of
both approaches are, indeed, complementary, citing “the
value-ladenness of enquiry and the theory-ladenness of facts”. This
is endorsed by Creswell (2007: 22), who argues that researchers with
a worldview which focuses on outcomes and a “concern with
application (what works)” are likely to select mixed methods to
pursue a line of enquiry precisely because they are not committed to
any one definitive system of reality or philosophical approach.
Truth may be viewed as what works at the time, and the preoccupation
with ‘what is reality?’ should be regarded as irrelevant. As Denzin
and Lincoln (2003) posit, arguments about the scientific superiority
of one research method over another are perhaps spurious. In a
philosophical approach based on pragmatism, links are sought between
theory and praxis with the core reflection connected to
“manipulating the social factors in a given context” (p.147). It is
possible to identify this underpinning in Sammons et al.’s list of
“implications for policy making” (p. 699) offered at the conclusion
of the paper, which appears congruent with the notion of validity as
“warranted” (Denzin and Lincoln, 2003: 147). Notwithstanding the
proliferation of claims regarding the benefits of mixed-method
studies, there are acknowledged problems; for example McEvoy and
Richards (2006) suggest there is considerable scope for
confusion due to the complex ontological and epistemological issues
that need to be resolved.
At
a pragmatic level, Moffatt et al. (2006) refer to the possibility of
obtaining different and conflicting findings.
The
research design comprised an ‘extensive’ literature review of the
current position on teacher effectiveness, and a longitudinal study,
using mixed methods, to investigate pupil and teacher voice, and
multi-level statistical analysis of value-added data related to
performance using SATs scores. Denzin
and Lincoln (2003:147) suggest that in this approach to research
“the research logic is constituted in the inquiry process itself,
and it guides the knowledge generation process”. Here, the
quantitative use of qualitative data and qualitative use of
quantitative data was intended to allow a further layer of
interpretation, and presumably to provide rich data for analysis. As
Reichardt and Rallis (1994: 85) suggest, “reality is multiple,
complex, stratified”, adding that any particular set of data may be
explained by more than one theory. This methodological approach also
has links with Denzin’s
(1970) ‘between method’ triangulation where different methods are
employed in relation to the same subject, and Faulkner’s (1982, in
Wellington, 2000) notion of ‘triads’ in which enquiry rests on three
‘legs’, each of which represents a separate method of data
collection; for example interviews, observation, and scrutiny of
documents.
The
researchers situate themselves within a broadly interpretive,
constructivist epistemological framework, beginning with a broad
research question; moving on to establish methods of systematic data
collection and developing strong triangulated measures to create a
series of multi-dimensional case studies. Creswell (2007) defines
case study as:
“a qualitative approach in
which the investigator explores… multiple bounded systems (cases)
over time, through detailed, in-depth data collection involving
multiple sources of information (e.g. observation, interviews,
audiovisual material and documents and reports) and reports a case
description and case-based themes” (Creswell, 2007,
p.73).
In
creating a picture of variations in teachers’ lives and the
resulting effect on pupil attainment, the project investigated
perceived teacher effectiveness through the use of questionnaires
and participant interviews; methods usual to interpretive case study
(Creswell, 2007; Kelliher, 2005; Winegardner, 2000). Traditionally,
interviews have been regarded as an opportunity to probe
understandings, although they are inevitably bound within the
participants’ own constructions of reality. Recent radical criticism
of the use of such data (Murphy et al., 1999, in Hammersley 2007)
suggests, however, that interview data cannot provide a sound source of
information, or be used to generalize. According to Hammersley
(2007:299), a constructionist approach suggests that it is mistaken
to assume that research can offer any “superior knowledge of
reality”. In this study, we are not informed if participants
understood that their effectiveness was being judged, and are not
provided with an example of the questions asked; neither are we told
whether the pupil and teacher data was accorded equal weight. It is,
therefore, difficult to make an informed judgment about the
construction the researchers placed on the data, or what might have
been eliminated from the analysis.
Relative
teacher effectiveness was measured through statistical analysis of
value-added data. This raises some concerns: firstly, although we
are told that adjustments were made for pupil background, there
seems to be no acknowledgement of pupil mobility and absence;
similarly, there appears to be no acknowledgement of teachers’ absence,
or of their experience of teaching SATs classes. It is not evident if the
difference between subject-specific teaching in year 9, and teaching
across the curriculum in years 2 and 6, was considered a variable.
According to Day et al. (2006) teachers were designated as ‘maths’
or ‘English’ practitioners. Clearly, in both Key Stages (KS) 1 and 2,
this is a misconception: teachers in the primary phase are required
to teach the full range of curriculum subjects, regardless of
original area of specialism, expertise or interest. It might have
provided a clearer picture of ability to raise attainment, and a
closer match with the picture in secondary classrooms, if primary
subject co-ordinators had been selected to participate. Neither was
it possible to ascertain in what way the qualitative data obtained
from pupil questionnaires contributed to the process of mapping and
analysis. At this point, however, it is important to note that some
criticisms of the research, for example a failure to explain
clearly the precise way in which the data were
combined, are addressed in more detail in a further paper (Day et
al., 2006) which elaborates the ‘methodological synergy’. For the
purpose of this assignment, however, reference has only been made to
Day et al. when it serves to illuminate a particular
point.
This
collection of data from multiple sources, using multiple methods,
was inevitably ‘messy’, and the researchers acknowledge various
challenges, including the lack of complete data sets. Perhaps a
collaborative action research project (Greenwood & Levin, 2005,
in Denzin & Lincoln, 2005) would have allowed participants to
feel more involved, and led to a lower drop-out rate. The use of
standardised tests (similar to optional SATs) would have allowed
data to be drawn from a wider sample of years.
The
central question raised by recent assessment frameworks (Furlong and
Oancea, 2006; Hammersley, 2007, 2008; Klein and Myers, 1999) is how
research projects may be categorized in order to judge them against
an appropriate set of criteria. As Hammersley (2008) argues, this is
not a straightforward process. Sammons et al.’s work, arguably, may
be considered to be situated within Furlong and Oancea’s (2006: 9)
category of “applied and practice-based research” which links policy
makers, practitioners and a variety of interest groups, in a new
contract; assessment of such studies, they suggest, will need to be
“multi-layered and multi-dimensional” (p.10). Hammersley (2008),
however, finds this an unhelpful definition, and somewhat
disingenuous in that it fails to acknowledge the rise of educational
accountability. He goes on to suggest the term “practical research”:
funded inquiry carried out by (perhaps) someone other than a
practitioner, with the aim of producing knowledge which informs
practice. Hammersley considers that Furlong and Oancea’s “applied
and practice-based research” may be distributed between “practical
research” and “inquiry subordinated to another activity” (p.752).
According to Hammersley, in practical research, we need to ask
whether it is relevant and valid; in subordinated inquiry, the only
question to be asked is “whether it facilitates the activity it
serves in whatever way” (p. 752).
It
is hard to judge exactly where to place Sammons et al.’s research,
since it aims to improve practice, but is subordinated to the school
effectiveness agenda. Any judgment of validity is also problematic
since this is a contested notion in interpretive research; Guba and
Lincoln (1989) for example, argue that ‘authenticity’ is a more
appropriate term, and Maxwell (1992) adds that validity comes from
accounts, rather than data and methods. If validity asks whether the
research measures what it intends to measure, then the findings are
of no help to the DfES, since they do not indicate “how teachers
become more effective over time” (p. 682); they do, however, identify
the factors which contribute to effectiveness, if effectiveness is
judged to be an ability to raise scores.
If
Klein and Myers’ (1999) summary
of principles for interpretive field research is applied to this
study, it appears that it meets some of the criteria for efficacy.
They state that ‘good’ research needs to follow the fundamental
principle of the hermeneutic circle: that human understanding is
dependent on a process of constant iteration. The researcher,
therefore, moves in and out between data sets to note independent
meaning situated both within small parts, and the recreated meaning
in the whole. In Sammons et al.’s study, the linking of qualitative
and quantitative data from a number of sources clearly demonstrates
an attempt to arrive at a new understanding. The qualitative data
were analysed and categorised, presumably through a process of
coding, using NVivo software. Through its capacity to import a range
of data, the software allows connections to be made, and categories
to be created and re-sorted in the light of new data; indeed, the literature review was
extended as the study progressed. The hugely complex process of
identifying themes, dimensions and scenarios was described in
some detail. Literature suggests a number of iterative approaches to
data categorisation, for example discourse analysis, conversation
analysis and Grounded Theory (Glaser and Strauss, 1967; Strauss and
Corbin, 1994). Although the researchers refer to their phases “being
grounded in our empirical data” (p. 685), having begun with a
comprehensive literature review, it is difficult to see how the
categories were able to ‘emerge’ inductively through the process of
axial coding. Inter-coder reliability is not
acknowledged. It was also not made clear how pupil views, and the
ontology of the researchers,
influenced the process. Although principles of the hermeneutic
circle were followed, the circle was not completed. This principle
is fundamental to all others, since it is the basis for building new
understandings.
The
second interdependent principle of contextualization requires a
thorough, critical reflection on the social and historical
background of the research setting. This is intended to allow the
audience to understand the genesis of the current position, and to
locate it within an easily interpretable context. Although
the authors assure us that the literature review was thorough and
extensive, the paper itself makes detailed reference to few major
studies (Huberman, 1993;
Kelchtermans, 1993); this fails to provide a broad,
research-based, historical overview. Indeed, Sammons et al. (2006)
provide the briefest of contextual information; although they
explain that the study was conducted as part of a DfES investigation
into teacher effectiveness, they do not acknowledge that this sits
within a school improvement culture, and is located firmly in the
government’s ‘what works’ research agenda. According to Hammersley
(2008:750) this rise of “an evidence-based practice movement” is at
the heart of increasing political control and challenge to the work
of professionals – presumably both practitioners and researchers
alike. Although Sammons et al. refer to the danger of teachers
feeling they were ‘judged’ (p.685), they fail to place this in the
context of Threshold Assessment: teachers have been required for
some time to provide documentary evidence of the way in which they
have raised pupil attainment. Although Bryman (1992) suggests that
the use of mixed methods allows relationships to be established
between macro and micro levels, Hammersley (2007: 291) cautions
that it is not possible for research to be made fully
accessible to the reader because of “the situated nature of the
judgments” which transcend any framework for evaluating quality.
Klein
and Myers’ third principle, that of interaction between researchers
and subjects, requires further critical reflection on the social
construction of meaning; it asks the researcher to acknowledge
the way in which this interaction inevitably co-constructs
the data. Bryman (1992) argues that fixed design research provides
the opportunity to probe the ‘structural’ aspects of the social
world, whilst flexible design is more effective in aiding
understanding of processes: combining the two allows both to be
interpreted. The attempt to make a creative use of quantitative data
qualitatively, and qualitative data quantitatively, has, in this
study, led to the creation of a socially constructed meaning; the
findings, however, are presented as ‘truth’. As interaction was
controlled by the researchers, and pupil views were neither fed back
to teachers, nor discussed within the report, it suggests that the
construction placed upon the data is that of the researchers alone.
According to Mason (1996), the interpretive researcher
can engage in a misguided search for a single truth through
the use of methodological triangulation.
Sammons et al., in seeking
to address validity and reliability, and reach a ‘true’
understanding of the situation, have used a process of
‘overlaying’ to seek the intersection of data.
Literature suggests (Flick, 2006; Webster and Mertova, 2007),
however, that triangulation is not a tool of validation, but an
alternative to it; the real test of validity, they suggest, is that
readers find the account ‘believable’. As
Mathison (1988: 13) posits, “there are three outcomes that might
result from a triangulation strategy… convergence, inconsistency,
and contradiction”. This study seeks
convergence.
The
principle of abstraction and generalization requires the application
of principles one and two (constant iteration to reach
understanding, and factors specific to the context) to the
interpretation of idiographic data: that specifically relating to
the case under study. In turn, this is related to the general
‘nomothetic’ concepts that describe the nature of human
understanding and social action, in order to generalize to the
big picture. Specificity is achieved in this report through an
effective process of iteration to build new categories, concepts and
understandings. Principle two is also applied, but to a lesser
extent. The data is specific to the case study schools, and to year
groups in which SATs are undertaken; the ‘what works’ agenda is not
made clear to those unfamiliar with the English education system
with its current preoccupation with testing and targets. There is,
however, careful consideration of nomothetic generalization to the
big picture in the key suggestions which conclude the article, and
refer for example to the need for focused CPD.
Klein
and Myers’ (ibid) principle of dialogical reasoning calls for
sensitivity to possible contradictions between the theoretical
perceptions which may have informed the research design, and the
story which is told by the data. Kelliher (2005) suggests
reliability may be achieved through ‘dialogical reasoning’ by
keeping a reflective diary. There is no indication that the
researchers did this, but there is evidence of reconsideration in
the discussion of the initial preconception that teachers become
more effective in raising pupil performance as they gain experience.
Due to a reduced teacher response rate, the final cycle of research
was, however, ineffective in providing further insights which might
have refined or altered perceptions. The key finding, that
teachers do not necessarily become more effective over time, is a
moot point. What needs further investigation, if the DfES are to
receive an answer, are at least four other possible interpretations
of the data. Firstly, whether these particular teachers have always
been less effective than their colleagues; secondly, if they have
either not been offered, or have avoided, support; thirdly, if
primary generalists have the same capacity in personal subject
knowledge per se, and pedagogical subject knowledge, to raise
attainment in both English and maths, unlike their specialist
secondary colleagues; fourthly, if it is possible to be effective –
if we define effectiveness as the ability to raise levels of
academic attainment - with every child in every cohort of
children.
It
is essential for the researcher to be sensitive to possible
differences in interpretations among the participants. When working
with multiple versions of the cases under study, differences are
inevitable, since these are built on participants’ views of truth
and knowledge; this sensitivity is a key feature of Klein and Myers’
principle of multiple interpretations. Sammons et al. work
skilfully with multiple narratives, and acknowledge a wide range of
factors impacting upon them. The quantitative use of qualitative
data and qualitative use of quantitative data was intended to add a
further layer of interpretation to this; however, although pupil
voice was mentioned in the overview of the project, the pupils’
perspective does not appear to be acknowledged. The final principle,
that of suspicion, requires researchers to be alert to, and to
acknowledge, possible ‘biases’ and systematic ‘distortions’ in the
narratives collected from participants: there are always other
possible constructions of the data. There is no acknowledgement of
bias, and the construction that each of the participants places upon
their own work and life experience is similarly unquestioned, and
taken as ‘truth’. Distortions from incomplete data sets, and the
focus on SAT classes, are, however,
acknowledged.
Malone
(2003) draws attention to the inherent nature of ethical dilemmas in
all educational research, and literature is clear that
situation-appropriate principles of action (see for example, Crotty,
2003; Malone, 2003; Pring, 2000, 2001; Small, 2001) need to be
developed to avoid “moral relativism” (Wellington, 2000: 57). Ethical
considerations in this paper are dismissed with one brief sentence;
there is no mention of obtaining informed consent from participants,
nor of methods of data storage, although NVivo does provide an
element of security by storing database and files together. In an
inquiry of this highly sensitive nature, one might expect more
acknowledgement of the issues; the use of pupil
views and SATs scores to judge teacher effectiveness, for example,
is not given as much attention as might be expected. Although Pink
(2004, in Silverman, 2006) suggests that consulting informants on
their view of the analysis not only facilitates their reflection on
initial informed consent, but also offers further insight into the
data, given the sensitive, and highly personal, nature of the data
collected, this would not have offered a solution.
If
professional judgment is central to educational practice, and
judgments are essentially moral in nature, then, as Biesta (2007)
suggests, education may be considered to be a “moral, non-causal
practice”; decisions about ‘what works’ in improving teacher
effectiveness should be set alongside considerations of what is
desirable for the participants. The researchers suggest the reduced
teacher response rate in the final year of the project might have
been due to lack of feedback on the value-added assessment; this
does lead to the question of what the teachers felt they gained from
participation, and if this was simply a case of being ‘done to’.
Sammons et al. acknowledge the limitation of their study in
conducting no observations of teaching. In the current educational
climate, inspection is moving from a joint focus on data and
classroom observation, to one which privileges data and
samples teaching. It is arguable, however, that if we seek to
understand if teachers’ espoused practice equates to their enacted
practice, and if we wish to ascertain what achievement looks like,
then it is necessary to add a further layer to data collection: that
of classroom observation.
In
defining the fourth generation of research, Guba and Lincoln
(1989: 8) state that findings are not facts, in the ultimate sense,
but are created through an interactive process; outcomes are not ‘the
way things really are’ or ‘true’, but “represent meaningful
constructions that actors, or groups of actors form to ‘make sense’
of the situations in which they find themselves”. The conclusions
drawn from this research are, inevitably, bound up in the constructions
placed upon them by the researchers; in the versions of ‘truth’ that
teachers and pupils alike offered, and in the partial sets of
quantitative data. Activism, according to Hammersley (2007), requires
that another factor be added to the criteria for judging the quality
of research. Beyond questions of epistemology lies “the relationship
of research to politics, policymaking and practice” (p.299); the
ultimate question here might ask if the research provided value for
money. Denzin and Lincoln (2005) are clear that, in America, the
scientifically based research (SBR) movement which has grown in the
wake of the No Child Left Behind Act of 2001 has created a “hostile
political environment for qualitative research” (p. 8). Although
they go on to argue that in mixed methods enquiry, qualitative data
is often accorded lower status, Day et al. (2006) were clear that in
the ‘synergy’ of their methodological approach, neither method was
accorded dominance; both sequential (findings from one method were
elaborated by another) and parallel (different methods of data
collection occur at the same time) strategies were applied.
Interestingly,
in the light of Klein and Myers’ framework (1999), Bloch (2004, in
Denzin and Lincoln, 2005:9) asserts that “the NRC [National Research
Council] ignores the value of using complex, historical, contextual,
and political criteria to evaluate inquiry”. The lack of ‘hard
evidence’ provided by qualitative research has, in this era of
increasing political manipulation of research, led to the growth of
mixed-methods experimentation: this can deny participants an active
role in the process. As Howe (2004: 57)
eloquently argues, the number of “rather influential” researchers
who have aligned themselves to this (mixed) method of research might
be reacting to “the perceived excesses of postmodernism”, or simply
complying with the strictures of the current educational and
political climate.
There
are a number of important lessons for the novice researcher to learn
from an analysis of this ambitious project. Firstly, the reality of
research will always exceed our crude attempts to label it, and its
outcomes are shaped by the values and beliefs that researchers bring
to the situation: this should be acknowledged. Secondly, even for
the most experienced researchers, epistemology and methodological
approaches are inevitably compromised to a certain extent when a
government-funded, ‘what works’ agenda acts as the driver. Thirdly,
the publication of edited findings presents a partial picture; it
may well be that pupil views were factored into the analysis, and
that ethical considerations were rigorously discussed with
participants, but an abridged summary prevents us from knowing this;
it was necessary to read the Day et al. paper of 2006 to gain a
greater understanding of both the research design and the
methodology. Fourthly, if quantitative data is to be used to add a
rich layer of information, and to generate new insights and
understandings (Fry et al., 1981), it must be questioned whether the
two types of data are really able to capture the same phenomenon, or
should be used instead to describe different facets of the case in
question to avoid ‘befuddling’ the audience through the process of
integration. Finally, the qualitative data by itself provided
a breadth and depth of material which could be used to improve
teacher effectiveness – if this is defined as a sense of positive
professional identity, a capacity for resilience, and a continuing
commitment to the profession. I note that the VITAE report in the
paper by Day et al. (2006:104) cites not only cognitive, but also
“affective” effectiveness.
Although
an
essentially constructivist and subjectivist epistemology appears to
underpin this enquiry, the use of mixed methods means that it is not
possible to locate it precisely within any particular paradigmatic
and epistemological framework. The qualitative interpretation of
quantitative data, search for links between theory and praxis, and
focus upon outcomes for policy making, suggests a pragmatic
position, although this is unacknowledged by the authors. Tashakkori
and Teddlie (1998: 21) suggest that for pragmatists, the method is
“secondary to the research question itself”. Paradoxically, they go
on to state emphatically that pragmatists believe that, whilst there
might be causal relationships between social phenomena, we can never
be precise about them; if this is the case, then the research
question seems unanswerable!
Perhaps
the most significant factor which prevents my interpreting Sammons
et al.’s paper as ‘good research’ is that, although one paper alone
cannot tell the full story, I have not been able to locate evidence
of an acknowledgement of the influence of their values and beliefs,
of the constructedness of their interpretation of the data, or of
that of the participants. Research is inevitably concerned with collecting
multiple versions of ‘reality’ or ‘truth’, and the addition of
quantitative data does not make it less so.
In
postmodern research, a
crystalline (rather than triangulated) approach, according to
Richardson (1997, in Denzin and Lincoln, 2003) allows for shifts,
changes and alterations in focus in an interweaving of “discovery,
seeing, telling, storying and representation” (p. 280); perhaps this
offers a clearer picture of the process an interpretive researcher
experiences as she struggles to present a representative rather than
‘accurate’ picture of the subject. As Silverman (2000:100) cautions,
if we conceive reality as socially constructed, and
context-dependent, then no one ‘phenomenon’ can be applied to all
cases to provide a definitive and
objective explanation; perhaps, as he
suggests, “simplicity and rigour” are preferable to “an illusory
search for the ‘full picture’”.
References
Biesta,
G. (2007). Why ‘what works’ won’t work. Evidence-based practice and
the democratic deficit of educational research. Educational
Theory,
57.1, 1 –
22.
Bryman,
A. (1992). Quantitative and qualitative research: further
reflections on their integration. In Brannen, J. (ed.), Mixing
Methods: Quantitative and Qualitative Research. Aldershot:
Avebury.
Creswell,
J.W. (2007). Qualitative Inquiry & Research Design: Choosing
among Five Approaches. London: SAGE.
Crotty,
M.
(2003). The foundations of social research: meaning and
perspective in the research process. London: SAGE.
Day, C.,
Sammons, P., Kington, A. & Gu, Q. (2006). Methodological
synergy in a national project: the VITAE story. Evaluation and
Research in Education, 19.2, pp. 102 – 125.
Denscombe,
M. (2003). The good research guide (2nd Ed.)
Buckingham: OUP.
Denzin,
N., & Lincoln, Y. (eds.) (2003). The landscape of qualitative
research (2nd Ed.). Thousand Oaks: SAGE.
Denzin,
N., & Lincoln, Y. (eds.) (2005). The SAGE handbook of
qualitative research. Thousand Oaks: SAGE.
Flick, U.
(2006). An introduction to qualitative research. London:
SAGE.
Fry, G.,
Chantavanich, S., & Chantavanich, A. (1981). Merging qualitative
and quantitative research techniques: towards a new research
paradigm. Anthropology and Education Quarterly, 12.2,
pp. 145 – 158.
Glaser,
B. & Strauss, A. (1967). The discovery of grounded theory:
Strategies for qualitative research. Chicago: Aldine.
Guba, E.
& Lincoln, Y. (1989). Fourth generation evaluation.
London: SAGE.
Hammersley,
M. (2008). Troubling criteria: a critical commentary on Furlong and
Oancea’s framework for assessing educational research. British
Educational Research Journal, 34.6, pp. 747 –
762.
Hammersley,
M. (2007). The issue of quality in qualitative research.
International Journal of Research & Method in Education,
30.3, pp. 287 – 305.
Hillage,
J., Pearson, R., Anderson, A. & Tamkin, P. (1998). Excellence in
research on schools (Research Report RR74). London: DfEE.
Howe, K.
(2004). A critique of experimentalism. Qualitative Inquiry,
10, pp. 42-61.
Kelliher,
F. (2005). Interpretivism and the pursuit of research
legitimisation: an integrated approach to single case design. The
Electronic Journal of Business Research Methodology, 3.2, pp. 123 –
132. (accessed 20th December 2008)
Klein,
H., & Myers, M. (1999). A set of principles for conducting and
evaluating interpretive field studies in information systems. MIS
Quarterly, 23.1, pp. 67 – 93.
Malone,
S. (2003). Ethics at home: informed consent in your own backyard.
Qualitative Studies in Education, 16.6, pp. 796 – 814.
Mason, J.
(1996). Qualitative researching. London: SAGE.
Mathison,
S. (1988). Why triangulate? Educational Researcher, 17.2, pp. 13 –
17.
Maxwell,
J. (1992). Understanding and validity in qualitative research.
Harvard Educational Review, 62.3, pp. 279 – 300.
McEvoy,
P. & Richards, D. (2006). A critical realist rationale for using
a combination of qualitative and quantitative methods. Journal of
Research in Nursing, 11.1, pp. 66 – 78.
Moffatt,
J., Howel, D., Mackintosh, M., White, S. & Reeve, M. (2006).
Using quantitative and qualitative data in health services research
– what happens when mixed method findings conflict? BMC Health
Services Research, 6. Online, available from:
http://www.medicine.usask.ca/family/research/manuals/mixed-methods-approaches/when%20mixed%20methods%20produce%20conflicting%20results.pdf
(accessed 10th January 2009)
Oancea,
A., & Furlong, J. (2006). Assessing quality in applied and
practice-based educational research. Oxford: Oxford Department of
Educational Studies. Available at: http://www.bera.ac.uk/pdfs/Qualitycriteria.pdf
(accessed December 22nd 2008)
Oancea,
A., & Furlong, J. (2007). Expressions of excellence and the
assessment of applied and practice-based research. Research
Papers in Education, 22.2, pp. 119 –
137.
Pring, R. (2000).
Philosophy of Educational Research. London:
Continuum.
Pring, R. (2001). The
virtues and vices of an educational researcher. Journal of
Philosophy in Education, 35.3, 407 – 421.
Reichardt,
C. & Rallis, S. (eds.) (1994). The qualitative-quantitative
debate: New perspectives. San Francisco: Jossey
Bass.
Robson, C. (2002). Real
world research. Oxford: Blackwell.
Schwandt,
T. (1996). Farewell to criteriology. Qualitative Inquiry.
2, 58 – 72.
Silverman,
D. (2000). Doing qualitative research. London:
SAGE.
Silverman,
D. (2006). Interpreting qualitative data. London:
SAGE.
Small, R. (2001). Codes
are not enough: What philosophy can contribute to the ethics of
educational research. Journal of Philosophy in
Education, 35.3, 387 –
406.
Strauss, A., & Corbin,
J. (1994). Grounded theory methodology: an overview. In Denzin, N.
& Lincoln, Y. (eds.), Handbook of Qualitative Research. London:
SAGE, pp. 1 – 18.
Sutton,
B. (1993) The Rationale for qualitative research: a review of
principles and theoretical foundations. Library Quarterly,
63, 411 – 430.
Tashakkori,
A. & Teddlie, C. (2003). Major Issues and Controversies in the
Use of Mixed Methods in Social and Behavioural Sciences. In
Tashakkori, A. & Teddlie, C. (eds.), Handbook of Mixed
Methods in Social and Behavioural research. Thousand Oaks:
SAGE.
Tashakkori,
A. & Teddlie, C. (1998). Mixed methodology: Combining qualitative
and quantitative approaches. Thousand Oaks:
SAGE.
Tooley,
J. & Darby, D. (1998). Educational research: A critique. London:
OFSTED.
Webster,
L., & Mertova, P. (2007). Using narrative
enquiry. London: Routledge.
Wellington,
J. (2000). Educational research. London:
Continuum.
Wildemuth,
B. (1993). Post-positivist research: two examples of methodological
pluralism. Library Quarterly 63, 450 –
468.
Winegardner,
K. E. (2007). The case study method of scholarly research. Online,
available from:
https://uascentral.uas.alaska.edu/onlinelib/Summer-2007/PADM635-JD1/Windgardner___case_study__research.pdf
(accessed 23rd December 2008)