Parry-Langdon, N., Bloor, M., Audrey, S. and Holliday, J. (2003) 'Process evaluation of health promotion interventions', in Policy and Politics, 31 (2): 207-16.

Process evaluation is required in areas such as health promotion because measuring outcomes or impacts alone is inadequate: (a) anticipated outcomes are difficult to define and measure, especially long-term matters such as empowerment; (b) different parties have different interests and different criteria. Evaluating processes helps identify what works well, and thus helps to pin down the precise effects of the intervention programme itself within what are usually complex outcomes. [An example of the complexity is given on 209-10 -- behaviour changes were noted for both experimental and control groups. In another example, the experiment was contaminated by widely diffused health information programmes crossing between the groups]. Current WHO guidance insists on a mixture of process and outcome evaluation.

One framework for evaluating health promotion (Nutbeam, 1998*) identifies important questions such as: 'Did the solution work? Can the programme be repeated/refined? Can the programme be widely reproduced?'. In turn this leads to issues such as '... did the programme reach all of the target population?... is the programme acceptable to the target population?... was the programme implemented as planned?' (209).

There has been a debate about suitable methods, and some signs of a 'tension between positivist and interpretivist paradigms' [although the authors themselves refer to work such as Hammersley's suggesting that methods such as participant observation span both paradigms. Nevertheless, they suggest that methods have been chosen as a result of taking sides in this dispute]. Process evaluation tends to use interviews, focus groups and ethnographic methods, and this can pose problems when set against quantitative measures. However, 'In practice, process evaluators... have no option other than to go about their business in a prudent fashion, remembering that they are operating on disputed ground' (209). They make pragmatic choices, and focus on trying to find out what makes an intervention work -- 'a focus on the relation between context, mechanism and outcome' (209).

In the first case study, there was an attempt to change the culture of schools to reduce smoking among students, in particular by training 'peer supporters'. The project was evaluated using a 'cluster randomised controlled trial', based on 59 schools: 30 were randomly allocated the programme, and 29 were not and served as a control group. Suggested measures of outcome included changes in smoking, and changes in 'perception of norms regarding adolescent smoking and intention to quit'. Pupils' smoking behaviour was tested both through questionnaire data and 'pupils saliva samples', gathered at baseline, at 10 weeks, and at one year. Four schools were selected for a more in-depth process evaluation. All those involved would be asked for their views, via discussion groups, interviews, observations, and diary sheets. Activities by agencies such as local authorities or the media would also be noted.
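[A note not in the article: in a cluster randomised controlled trial, the unit of randomisation is the whole school (the 'cluster'), not the individual pupil. A minimal sketch in Python of such an allocation, using invented school identifiers and the baseline/10-week/one-year measurement points described above:]

import random

# Cluster randomisation sketch: schools, not pupils, are allocated.
# The 59 school identifiers here are hypothetical.
schools = [f"school_{i:02d}" for i in range(1, 60)]

rng = random.Random(2003)  # fixed seed so the allocation is reproducible
rng.shuffle(schools)

intervention = sorted(schools[:30])  # 30 schools receive the programme
control = sorted(schools[30:])       # 29 schools serve as controls

# Measurement points for the questionnaires and saliva samples.
measurement_points = ["baseline", "10 weeks", "one year"]

print(f"{len(intervention)} intervention schools, {len(control)} controls")
for point in measurement_points:
    print(f"collect questionnaire and saliva data at: {point}")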

In the second case study, volunteers worked with adults to plan reductions in smoking in schools. Six schools using the plan were matched with six control schools. Questionnaires provided the data about pupil behaviour, while process evaluation used more qualitative methods in the intervention schools, including a one-day workshop inviting participants to comment on the success of the project, and semi-structured interviews. Again, background information was also noted.
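[Again, not from the article: matching here means pairing each intervention school with a similar control school rather than randomising. A minimal illustrative sketch in Python, using invented school names and pupil numbers as the matching variable -- the article does not specify what the schools were matched on:]

# Matched-pairs sketch: pair each school using the plan with the
# available control school closest in size. All figures are invented.
plan_schools = {"A": 850, "B": 1200, "C": 640, "D": 980, "E": 1500, "F": 720}
candidate_controls = {"P": 830, "Q": 1250, "R": 600, "S": 1000,
                      "T": 1450, "U": 700, "V": 900}

pairs = {}
available = dict(candidate_controls)
for school, size in sorted(plan_schools.items(), key=lambda kv: kv[1]):
    # greedy nearest-neighbour match on pupil numbers
    match = min(available, key=lambda c: abs(available[c] - size))
    pairs[school] = match
    del available[match]

for school, match in sorted(pairs.items()):
    print(f"intervention {school} ({plan_schools[school]} pupils) "
          f"paired with control {match} ({candidate_controls[match]} pupils)")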

[The full range of activities is provided in a table on page 212. It involves interviewing key staff, liaison staff, and classroom teachers at various stages of the training; using self-completed questionnaires to test knowledge of matters such as school policies and opportunities in the curriculum; interviews to assess the impact and value of the intervention and its fit with existing school policies; observations to see how pupils were told about the programme; interviews on perceived successes and failures; and questionnaires to track any changes].

[The article then goes on to examine the use of interviews in particular. Doing interviews can be difficult in the school setting, with busy teachers and pupils, fitting interviews around timetables, and so on]. Quite detailed discussions of administration and implementation problems resulted. Interviews were used in particular to pick up matters such as impact in the context of the school. There is no attempt to see interview data as providing natural accounts, and suitable caution is required. However, interviews provide a reasonable way to access the perceptions of different groups. They provide a way for participants to speak freely in confidence, and thus may cover different ground than a focus group. Semi-structured interviews enable follow-up of issues raised by participants, unlike questionnaires. Analysis can be time-consuming, and 'some data may not ever be fully analysed' (214).

Using multiple methods does not always increase the validity of the findings. It is possible to use multiple methods either to cover a complex field, or deliberately to focus on the same issues ['triangulation'?].

The approaches summarised offer a '"top-down" mode of intervention' (214), although they aim at impacting individual behaviours [the authors are aware of the importance of the school context specifically here]. Similarly, the school context limits the possibilities, since the curriculum is already full. Nevertheless, there are few problems of the kind involved in community initiatives, where members have to be persuaded to participate and where multiple agencies are operating.

Overall, process evaluations are essential as well as impact evaluations, particularly for answering questions of policy effectiveness. 'Ideally, every intervention should have a systematic process evaluation' (215). The methods used are controversial, but interviews can play a critical part 'in a pragmatic and realist research strategy' (215).

*Nutbeam, D. (1998) 'Evaluating health promotion -- progress, problems and solutions', in Health Promotion International, 13 (1): 27-44.
