Notes on Belson W (1977) 'Television Violence and the Adolescent Boy', a paper based on an address given to the British Association for the Advancement of Science, September 6th, Aston, Birmingham, UK (and a try-out for the much larger book of the same name).
1. Gather and develop hypotheses about the supposed effects on boys of exposure to TV violence. Pilot samples of boys, their parents, professional broadcasters and others were interviewed for their views. Press clippings were analysed. After some discussion, 22 principal hypotheses emerged -- eg that high exposure to TV violence increases: involvement in violent behaviour; the use of bad language; pre-occupation with violence; callousness; acceptance of violence as a way to solve problems; etc. (The most important 19 hypotheses appear in Diagram 2.)
2. Develop techniques to measure the variables, especially (a) the extent of personal involvement in violent behaviour (b) the extent and nature of exposure to TV violence. The problem with the first variable is to get accurate information with no concealment or exaggeration. Much pilot testing and modification ensued, leading to the final procedure -- (a) interviews took place in complete anonymity, with boys given false names (b) boys were asked about their behaviour over the last six months. They were asked to sort 53 cards containing descriptions of violent behaviour (e.g. 'I have kicked somebody') into 'yes' and 'no' piles, screened from the interviewers (to avoid any possibility of interviewer influence). This was followed by 'a procedure designed to bring out any resistances he had to making truthful statements about his violent behaviour' [some sort of counselling?], and then boys were given a chance to re-examine and re-sort the 'no' pile. Questions were then asked about the items in the 'yes' pile -- what sort of act it was, how often it had happened in the last 6 months, what the object of the act was, and so on. Interviewer variability was controlled by producing a book of instructions to govern the whole procedure.
The second variable was measured in a similar way, it seems (the paper is not too clear here). 68 TV programmes, broadcast between 1959 and 1971, were offered (on the same sort of prompt cards?). These programmes had already been chosen and rated for violence by an expert panel (NB it is not uncommon for researchers and commentators simply to use their own definitions of what counts as a 'violent' programme). It seems the programmes were divided into the types described above (fictional violence, cartoon violence etc). Again, boys were asked about their exposure to these programmes. The method (described more fully in the book) was piloted and shown to have a 'high level of reproducibility' (one test of reliability, of course).
3. Formulate a strategy to investigate causal hypotheses 'in the context of the multi-determinant social scene'. It is difficult to isolate causal factors where so many factors are involved. Laboratory experimentation is unrealistic. Instead, a statistical investigation is needed.
Let us just pursue this a minute, because much of the strength of Belson's study depends on some arguments which are assumed in the actual piece. We know that social situations are complex and many factors are at work which affect behaviour. How can we get to the really significant factors?
(a) We can theorise and debate -- engage in 'systematic challenging and…content analys[is]' as Belson puts it. I don't know what the Belson team actually did, so I'll make up my own examples. Content analysis might help us do some preliminary sorting -- if one expert claimed that 'hormones' were responsible for violence, and another that 'adolescence' was, and we found that they really meant the same thing -- we could reduce the two factors to one. To illustrate 'systematic challenge' let's take another absurd example -- let's imagine that one of the experts interviewed suggested that drinking tea led to violent behaviour. We could debate this straight away -- does it sound right? Does it really help explain the facts as we know them? Is there any evidence to suggest that tea drinking is always found with violent behaviour, and abstinence from tea always accompanies non-violent behaviour? We would probably conclude that tea drinking was not a very strong causal factor, even though the expert believed it was.
(b) We can build on this last point to do some statistics as well. We hinted that one technique involved observing cases to see if tea drinking was associated with violence. This is more or less what correlation does -- it is a statistical technique to compare the ways in which variables are related together (basically by comparing arithmetic means and the shapes of the lines or curves of the distributions). If tea consumption rises, does violent behaviour also rise, and by similar amounts? If we observed a fall in tea drinking, would we also find a fall in violent behaviour? Correlations offer this sort of comparison between variables -- tea drinking and violent behaviour would be perfectly correlated (measured in a particular way -- by calculating a correlation coefficient) if they varied in exactly the same way. Perfect correlation would be shown by a correlation coefficient of 1.0. If there was no relationship at all, the variables would have a correlation coefficient of 0.0, and if there was a partial relationship, the coefficient would be 0.6 or 0.8 or whatever.
(c) If we do find a strong correlation (better than 0.5, say), what can we infer? A strong correlation offers some support for thinking that one variable causes the other. Why? Because if X is causing Y, we would expect to find X and Y associated a lot (since causes are usually found associated with, classically occurring before, their effects, no?). However, we cannot be sure that X causes Y. As we'll see, another factor could be causing both OR a pattern where Y causes X would also give a strong correlation (especially if we are not sure which one came first in terms of time). As we'll see, Belson tries to develop procedures to resolve these different possibilities. But correlation is the first step at least in finding causes -- it would be most unlikely that X caused Y if they were NOT found occurring together frequently. So -- strong correlations help to point to strong likely causes, weak correlations help you eliminate factors as a strong cause. An obvious step is to correlate all the factors with violent behaviour, and then concentrate on those showing strong (high) correlation coefficients -- that way we make some progress at least.
(d) There is another way to use correlations too. The factors might be correlated with each other as well as with the 'dependent variable' (violent behaviour in this case). This also helps reduce complexity. Why? Let us imagine that we have 10 likely variables that our experts have suggested. We have already indicated that some of them might really be pointing to the same thing -- that the expert who said 'hormones' and the expert who said 'adolescence' really meant the same thing. Again we can get at this possibility using statistics, and not just by relying on 'content analysis'. If those two variables were correlated together, and the correlation coefficient was close to 1.0, we could be confident they really were indicating the same thing, for all practical purposes. Correlations between variables can show up useful patterns (or 'clusters' or underlying 'factors') like this, and reduce complexity as a result. Instead of a whole range of variables to deal with, we can choose a few significant ones instead. It is a procedure like this which enabled the Plowden Committee to manage a huge amount of evidence on parental interests, incomes, state of housing stock, school resources and the like by isolating three main (underlying) factors to explain school underachievement -- 'material', 'cultural', and 'school' factors.
(e) Finally, it might be worth re-emphasising the caution needed before talking of correlations as causes. Belson is very careful throughout and talks of 'support for causal analysis'. It is possible to say that no amount of careful statistics can ever lead conclusively to a cause. Ironically enough, causes require theories too. You need some theoretical reason for thinking that X causes Y before you can finally stop worrying that, all the time, some other variable which you have not measured is causing both. His own study fails this test, in fact, as we shall see if we look far ahead -- he expected exposure to TV violence to cause violent behaviour by affecting boys' attitudes -- yet the study found no real evidence for this effect on attitudes, despite the other evidence demonstrating some associations between exposure to violent TV and violent behaviour directly. With his favourite explanation not supported, Belson is forced to be even more cautious, as we'll see.
Let us return to the study itself and follow Belson as he systematically pursues his rigorous approach. You need the following steps:
Formulate a nice testable hypothesis -- that 'High exposure to television violence increases the degree to which boys engage in violent behaviour'. In formal terms, 'high exposure to TV violence' is the independent variable, and 'violent behaviour' the dependent one.
Disentangle these ambiguities. To disentangle ambiguity one, Belson researched the backgrounds of both qualifiers and controls to find out if they had any other variables known to be linked with violent behaviour. No less than 227 such variables were listed. Obviously, the next stage was to see if qualifiers and controls differed in terms of these other variables as well. However, with a careful researcher like Belson, we do not just make a simple comparison. First we need to find the really important variables for our purposes instead of using all 227, and then we have to find a way of using them to compare the groups.
What of ambiguity two? Maybe the hypothesis is right but in reverse? Here is where Belson claims to be on the verge of a breakthrough in 'reverse hypothesis testing'. This is explored a bit in the question and answer session after he presented his paper. Very basically (I am no statistician), Belson pursued a strategy rather like the one to eliminate ambiguity one, regrouping and re-ordering the data in his study to test the reverse hypothesis (that violent behaviour causes the watching of violent TV). Again we divide the sample into controls and qualifiers (but this time in terms of the levels of violence which they commit, not the amount of violent TV they watch). We still have to eliminate ambiguity one first, and we do this by the artificial regrouping and 'matching' process described above. If any differences remain after the background variables are controlled, we might have some real effect. We then compare the results for the forward hypothesis with those for the reverse hypothesis -- specifically, we look at the shape of the relationship (measured by correlations or by the graph of the relationship). Are the correlations stronger with the forward or the backward hypothesis? (More technically, which hypothesis explains best the differences between qualifiers and controls?) The better fit is assumed to yield the better hypothesis. (NB if the two are the same, we cannot eliminate ambiguity two). Luckily, the forward version of the hypothesis generally DID fit the evidence better (i.e. explained the differences between qualifiers and controls). However, even here this was NOT so strongly demonstrated for all types of violence and violent TV, especially for less serious violence (including 'aggression in sport or play' and 'an increase in swearing or bad language use').
4. Drawing a representative sample (discussed a bit in the question and answer session). First, a stratified random sample of electoral wards was drawn within a 12 mile radius of central London. In each ward, a random sample was drawn of 40 'starting points'. At each starting point, interviewers asked if a boy of 13-16 years lived there. If so, he was interviewed. If not, interviewers proceeded on a fixed route (turning left or right, taking the next house or missing one etc) until they found a 13-16 year old boy to interview. Supervisors ensured the procedure was followed. If the trail went cold, a new starting point was issued. 21,600 homes were visited in all in this way. Of those identified as having a suitable boy at home (looks like about 1 in 10), 82% yielded a suitable interviewee, producing 1650 in all for the interviews (so 18% refused) (and subsequently 1565 for follow-ups).
Let's pause. Whatever your views about statistics (and I have known students who think using statistics is metaphysically evil in some profound sense), you have to admit that this is a very careful study indeed. It is a bit slippery to follow -- especially where Belson uses statistical techniques as if they were simply obvious or uncontroversial. And, of course, there are assumptions in using statistical tests, which statisticians rarely discuss (for another example, see my account of Fraser's famous study). But at least those assumptions are available to questioners. And if they are made, the study follows with an impeccable logic, pursued even to an inconvenient conclusion. Compared to many other studies, and certainly compared to the strongly held feelings of many commentators on this issue, this is close to being about as openly 'objective' as you are likely to get. It may indeed be too objective for many readers -- as we shall see the findings give no easy support to either side in the dispute about violent TV and its effects.
1. '…we cannot speak of firm proof in relation to the phenomena here investigated'
2. '…there is no suggestion …that other factors -- physical, psychological, environmental -- do not [also] enter in a causally potent way into the development of violence in a society….it would be ridiculous to suggest such a thing. It just happens that we are focusing our attention in this study upon the influence of television violence in its own right.'
3. Acts of violence seem quite common among London boys, 'although the greater part of them took the form of rough mischief and aggravation'. However, of acts rated (by experts) as 'serious violence' there were more than 9,000 cases in the six-month period under study (average being approximately 6 acts each). About 12% were 'repetitive' committers (and these were really responsible for the high average -- about half the boys were never involved at all). Generally truants, school dislikers, kids from large families and kids from 'lower socio-economic groupings' were more likely to be committers.
4. Diagram 1 presents the main findings:
Belson says that this table shows support for the hypothesis in the first four cases (but not in the last case), and that the most worrying is the support for the view that exposure to violent TV causes serious violence (although pointing out that a kind of hard core of only 12% of the boys committed that sort of act).
Diagram 2 summarises the more specific findings. Don't panic! (Christ these graphics are naff)
This was really tricky to modify and lay out for a website (and what a crappy result!!). I have omitted some variables altogether from the original (hence the missing variable numbers), since it helps to reduce complexity to do so (and they were not really important or interesting -- although Belson sometimes disagrees!).
This can be a scary diagram to read, but you can cope if you think of it in the usual way as having rows (horizontal) and columns (vertical). The next step is to pick particular rows or columns and blank out the rest (physically, with a bit of paper or card if it helps).
So: look down the categories of violence in column 1 and find your favourite. Let's take violence in a domestic or family setting (variable number 4). Now look along that row. There is a shaded square in column 2 (which indicates 'violent behaviour: weighted total of all acts'). That shaded square indicates that there is a 'moderate amount' of support from the evidence that a causal relationship exists -- that exposure to violence on TV depicted in a domestic or family setting causes violent behaviour (general). The next column indicates that there is moderate support for a causal relation between this kind of TV violence and serious violent behaviour. Are you surprised by this finding? Is it more or less of a causal connection than you thought? Does this prove that exposure to TV violence of this type causes violent behaviour? Should you be concerned about exposing yourself or your kids to such depictions of violence?
The last two questions are open to debate, aren't they? Much depends on how 'moderate' the support is for a causal relationship of this kind. You have to bear in mind all the cautious comments we considered in the procedure section too. You might need to think carefully about this finding -- this variable seems not to be related to any other kind of violence, which is rather odd. One anomaly in particular really leaps out -- there seems no connection at all between this sort of TV violence and the last three types of violence (variables 11, 12 and 20 in the last three columns). Finally, as a person, you have to consider what risks you might run here. Looking down the rows again, there are more dubious types of violence, where the evidence offers a 'fairly large amount of support' for a causal relation (darker shading) -- eg 'fictional realistic violence' or 'comics' or 'newspapers'. Apart from that, there are really rather mixed findings here, with lots of gaps, question marks or 'NM' signs.
Belson's own conclusions
These do not always line up with the table in Diagram 2. Sometimes it is my fault because I have not copied all the data that Belson finds significant. Sometimes I think Belson takes a bit of a liberty with his own data, though. I'll indicate this as we go down. Here are the man's own findings:
'The hypotheses getting most support were: Serious violence is increased by long term exposure to:
This is an important section of the findings, we have argued. Belson thought that attitudes were the key causals involved in the link between watching TV violence and acting violently. We have already argued that a strong causal theory is essential to clinch the statistical work on causality. However:
'by and large, the evidence tended not to support the attitude-type hypotheses'. This is indicated in the table by looking at the last three columns which refer to attitudes -- and there is only one small association with any of the independent variables -- between row 9 and column 11 -- and even here this is accompanied with a degree of uncertainty.
To cut back on the total amount of violence shown [and this was in the 1970s!! What would he make of TV in the 1990s?]
We have come a long way with Belson, seen a lot of thorough and careful work undertaken ( and a lot of money spent) -- and we are left with a plea for more research! I am afraid this is quite typical in this field. The data doesn't really support the strong opinions on either side. On the one hand, those who deny any effect of TV violence have to admit that this careful study did show some positive relationships between types of violence on TV and types of violent action in boys. On the other hand, the results are not consistently conclusive, and the key causal mechanisms have failed to attract much support.
Now of course this was an early study, and much might have changed since. Has the frequency or intensity of violence on TV got worse, do you think, for example? I would want to look at the evidence myself before even admitting that. Has the audience got more sophisticated about violence -- or more 'disinhibited'?
Newer methods became much more fashionable, as hinted at by Glover (in Haralambos M (ed) (1985) Sociology: New Directions, Ormskirk: Causeway Press). Why not study the audience and what they mean by violence, and ask them whether they think they have been affected? Ethnographic work claims to be able to do just that, of course, instead of approaching the whole issue so 'objectively' and indirectly as did Belson and the others. Unfortunately, ethnography has its problems too -- perhaps respondents will exaggerate or minimise or rationalise the effects? What about unconscious or long term effects again? How can we display ethnographic data so as to make conclusions clear and open to debate? Not everyone is capable of telling others exactly how they felt after watching TV -- and there is a danger that ethnographers will get to study the more reflective and interested viewers rather than typical viewers.
Finally, I hope you can see that there simply are no right procedures and probably no simple right answers in this area. If trained researchers end their work with uncertainty, it is likely that that is because it is a very uncertain matter. Of course, we have already suggested that one way out of the uncertainty is to use your common sense or your personal beliefs to decide -- and parents just have to do that in the end. However, it is quite another matter when politicians, pundits and self-appointed guardians of morality want to regulate the rest of us, especially when they quote impressive-looking statistical evidence to support them -- you can be sure that somebody somewhere is simplifying a very complex issue!