Notes on Belson W (1977) 'Television Violence and the Adolescent Boy', a paper based on an address given to the British Association for the Advancement of Science, September 6th, Aston, Birmingham, UK (and a try-out for the much larger book of the same name)

Background
A representative sample of boys aged 13--16 (n = 1565) was studied to assess the effects of long-term exposure to television violence.

Procedure

1. Gather and develop hypotheses about the supposed effects on boys of exposure to TV violence. Pilot samples of boys, their parents, professional broadcasters and others were interviewed for their views. Press clippings were analysed. After some discussion, 22 principal hypotheses emerged -- e.g. that high exposure to TV violence increases: participation in violent behaviour; the use of bad language; preoccupation with violence; callousness; acceptance of violence as a way to solve problems; the attractiveness of violence, etc. (The most important 19 hypotheses appear in Diagram 2.)
Further, different types of violence were identified as having different possible effects: fictional violence; violence performed by 'good guys'; cartoon violence; violence in the news; violence in westerns; violence in sporting programmes -- and so on (for a fuller list see Diagram 2). In this way, the team were able to study both general and specific effects.

2. Develop techniques to measure the variables, especially (a) the extent of personal involvement in violent behaviour (b) the extent and nature of exposure to TV violence. The problem with the first variable is to get accurate information with no concealment or exaggeration. Much pilot testing and modification ensued, leading to the final procedure -- (a) interviews took place in complete anonymity, with boys given false names (b) boys were asked about their behaviour over the last six months. They were asked to sort 53 cards containing descriptions of violent behaviour (e.g. 'I have kicked somebody') into 'yes' and 'no' piles, screened from the interviewers (to avoid any possibility of interviewer influence). This was followed by 'a procedure designed to bring out any resistances he had to making truthful statements about his violent behaviour' [some sort of counselling?], and then boys were given a chance to re-examine and re-sort the 'no' pile. Questions were then asked about the items in the 'yes' pile -- what sort of act it was, how often it had happened in the last 6 months, what the object of the act was, and so on. Interviewer variability was controlled by producing a book of instructions to govern the whole procedure.

The second variable was measured in a similar way, it seems (the paper is not too clear here). 68 TV programmes, broadcast between 1959 and 1971, were offered (on the same sort of prompt cards?). These programmes had already been chosen and rated for violence by an expert panel (NB it is not uncommon for researchers and commentators simply to use their own definitions of what counts as a 'violent' programme). It seems the programmes were divided into the types described above (fictional violence, cartoon violence etc). Again, boys were asked about their exposure to these programmes. The method (described more fully in the book) was piloted and shown to have a 'high level of reproducibility' (one test of reliability, of course).

3. Formulate a strategy to investigate causal hypotheses 'in the context of the multi-determinant social scene'. It is difficult to isolate causal factors where so many are involved. Laboratory testing is unrealistic. Instead, a statistical investigation is needed.



Let us just pursue this a minute, because much of the strength of Belson's study depends on some arguments which are assumed in the actual piece. We know that social situations are complex and many factors are at work which affect behaviour. How can we get to the really significant factors?

(a) We can theorise and debate -- engage in 'systematic challenging and…content analys[is]' as Belson puts it. I don't know what the Belson team actually did, so I'll make up my own examples. Content analysis might help us do some preliminary sorting -- if one expert claimed that 'hormones' were responsible for violence, and another that 'adolescence' was, and we found that they really meant the same thing -- we could reduce the two factors to one. To illustrate 'systematic challenge' let's take another absurd example -- let's imagine that one of the experts interviewed suggested that drinking tea led to violent behaviour. We could debate this straight away -- does it sound right? Does it really help explain the facts as we know them? Is there any evidence to suggest that tea drinking is always found with violent behaviour, and abstinence from tea always accompanies non-violent behaviour? We would probably conclude that tea drinking was not a very strong causal factor, even though the expert believed it was.

(b) We can build on this last point to do some statistics as well. We hinted that one technique involved observing cases to see if tea drinking was associated with violence. This is more or less what correlation does -- it is a statistical technique to compare the ways in which variables are related together (basically by comparing arithmetic means and the shapes of the lines or curves of the distributions). If tea consumption rises, does violent behaviour also rise, and by similar amounts? If we observed a fall in tea drinking, would we also find a fall in violent behaviour? Correlations offer this sort of comparison between variables -- tea drinking and violent behaviour would be perfectly correlated (measured in a particular way -- by calculating a correlation coefficient) if they varied in exactly the same way. Perfect correlation would be shown by a correlation coefficient of 1.0. If there was no relationship at all, they would have a correlation coefficient of 0.0, and if there was a partial relationship, the coefficient would be 0.6 or 0.8 or whatever.
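Here is a tiny, made-up illustration of what a correlation coefficient actually computes (the numbers are invented for the tea-drinking example, not taken from Belson):

```python
# Pearson's correlation coefficient, computed from first principles.
def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented 'tea drinking' and 'violent behaviour' scores for six boys
tea = [1, 2, 3, 4, 5, 6]
violence = [2, 4, 6, 8, 10, 12]    # rises in lockstep with tea drinking
print(pearson_r(tea, violence))    # 1.0 -- perfect correlation

violence2 = [5, 1, 4, 2, 6, 3]     # no systematic relationship with tea
print(round(pearson_r(tea, violence2), 2))   # about 0.09 -- close to 0
```

(A coefficient can also be negative, down to -1.0, if one variable falls exactly as the other rises.)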

(c) If we do find a strong correlation (better than 0.5, say), what can we infer? A strong correlation offers some support for thinking that one variable causes the other. Why? Because if X is causing Y, we would expect to find X and Y associated a lot (since causes are usually found associated with, classically occurring before, their effects, no?). However, we cannot be sure that X causes Y. As we'll see, another factor could be causing both OR a pattern where Y causes X would also give a strong correlation (especially if we are not sure which one came first in terms of time). As we'll see, Belson tries to develop procedures to resolve these different possibilities. But correlation is the first step at least in finding causes -- it would be most unlikely that X caused Y if they were NOT found occurring together frequently. So -- strong correlations help to point to strong likely causes, weak correlations help you eliminate factors as a strong cause. An obvious step is to correlate all the factors with violent behaviour, and then concentrate on those showing strong (high) correlation coefficients -- that way we make some progress at least.

(d) There is another way to use correlations too. The factors might be correlated with each other as well as with the 'dependent variable' (violent behaviour in this case). This also helps reduce complexity. Why? Let us imagine that we have 10 likely variables that our experts have suggested. We have already indicated that some of them might really be pointing to the same thing -- for example, that the expert who said 'hormones' and the expert who said 'adolescence' really meant the same thing. Again we can get at this possibility using statistics, and not just by relying on 'content analysis'. If those two variables were correlated together, and the correlation coefficient was close to 1.0, we could be confident they really were indicating the same thing, for all practical purposes. Correlations between variables can show up useful patterns (or 'clusters' or underlying 'factors') like this, and reduce complexity as a result. Instead of a whole range of variables to deal with, we can choose a few significant ones instead. It is a procedure like this which enabled the Plowden Committee to manage a huge amount of evidence on parental interests, incomes, state of housing stock, school resources and the like by isolating three main (underlying) factors to explain school underachievement -- 'material', 'cultural', and 'school' factors.
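To make the point concrete, here is a sketch (again with invented numbers) of using correlations between the suggested factors themselves to spot redundancy -- the 'hormones'/'adolescence' case above:

```python
import itertools

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented scores for five boys on three suggested factors
factors = {
    "hormones":    [1, 3, 2, 5, 4],
    "adolescence": [2, 6, 4, 10, 8],   # exactly twice the 'hormones' scores
    "tea":         [5, 1, 4, 2, 3],
}

# Any pair correlated above 0.95 is probably measuring the same thing,
# so the two variables can be collapsed into one.
for a, b in itertools.combinations(factors, 2):
    r = pearson_r(factors[a], factors[b])
    if r > 0.95:
        print(f"{a} and {b} look like one underlying factor (r = {r:.2f})")
```

This prints that 'hormones' and 'adolescence' look like one underlying factor; 'tea' correlates strongly with neither, so it stays as a separate candidate.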

(e) Finally, it might be worth re-emphasising the caution needed before talking of correlations as causes. Belson is very careful throughout and talks of 'support for causal analysis'. It is possible to say that no amount of careful statistics can ever lead conclusively to a cause. Ironically enough, causes require theories too. You need some theoretical reason for thinking that X causes Y before you can finally begin to stop worrying that, all the time, some other variable which you have not measured is causing both. His own study fails this test, in fact, as we shall see (looking far ahead) -- he expected exposure to TV violence to cause violent behaviour by affecting boys' attitudes -- yet the study found no real evidence for this effect on attitudes, despite the other evidence demonstrating some associations between exposure to violent TV and violent behaviour directly. With his favourite explanation not supported, Belson is forced to be even more cautious, as we'll see.


Let us return to the study itself and follow Belson as he systematically pursues his rigorous approach. You need the following steps:

Formulate a nice testable hypothesis -- that 'High exposure to television violence increases the degree to which boys engage in violent behaviour'. In formal terms, 'high exposure to TV violence' is the independent variable, and 'violent behaviour' the dependent one.
Compare two groups of boys in the sample -- 'qualifiers' and 'controls'. 'Qualifiers' are those boys who have high scores in terms of exposure to TV violence, 'controls' have low scores. The sample was divided into halves in this way. 
Search for any differences in terms of scores of violent behaviour. If the qualifiers also have higher scores of actual violent behaviour, the hypothesis looks promising -- but not yet proven. There are other possible explanations for any differences between qualifiers and controls! Ambiguity one: qualifiers and controls may also differ in terms of their characteristics or backgrounds anyway, and it might be these differences that explain different rates of violent behaviour (e.g. more qualifiers might turn out to have come from violent homes). Ambiguity two: those boys already involved in violent behaviour might be encouraged to watch more violent TV (the hypothesis in reverse). NB These ambiguities affect many studies where correlations are used to try to establish causality.

Disentangle these ambiguities. To disentangle ambiguity one, Belson researched the backgrounds of both qualifiers and controls to find out if they had any other variables known to be linked with violent behaviour. No fewer than 227 such variables were listed. Obviously, the next stage was to see if qualifiers and controls differed in terms of these background variables as well. However, with a careful researcher like Belson, we do not just make a simple comparison. First we need to find the really important variables for our purposes instead of using all 227, and then we have to find a way of using them to compare the groups.
To do this, a clever sorting technique was pursued to try and locate these significant variables. First Belson found the background variable most strongly correlated with violent behaviour. However, not just any variable will do. We want one that is also associated with exposure to violent TV. Let us imagine that having a father convicted for violence is such a variable -- ideally, it would be (a) strongly correlated with violent behaviour in boys, and (b) also strongly correlated with watching violent TV (and thus a source of major differences between qualifiers and controls, to use Belson's more technical terms). In fact, this clever variable was 'stole sweets from shop'. All the variables were assessed in this way until only variables with no significant correlations or differences remained.
Then qualifiers and controls are compared again against these background variables, again in a particular way. The 'controls' are artificially regrouped, deliberately, so that they are the same as the qualifiers in terms of the important background variables (so they become the 'modified controls'). This gives a picture of what the controls would have looked like (in terms of the violent behaviour they admitted to) if they had been the same as the qualifiers. The difference between what they would have looked like and what they actually did look like, in terms of violent behaviour admitted, helps you gauge the effect of the background variables -- once these have been equated, so to speak ('matched' is Belson's term), there should be no difference IF those background factors were the most important in determining levels of admitted violence. However, Belson found there were still significant differences, even after background variables had been 'matched' -- hence there is still support for the original hypothesis (note the cautious terminology), and ambiguity one is dealt with.
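A crude sketch of the 'modified controls' idea, with everything invented (the scores, and the use of a single background variable -- Belson matched on many at once):

```python
# Each boy is (violence score, stole_sweets_from_shop?). Invented data.
qualifiers = [(8, True), (6, True), (4, False), (2, False)]
controls   = [(6, True), (2, False), (2, False), (2, False)]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def matched_mean(group, target):
    """Mean violence score in `group` after reweighting each stratum of
    the background variable to the proportions found in `target` --
    a rough stand-in for Belson's 'matching' of the controls."""
    total = 0.0
    for stratum in (True, False):
        share = sum(1 for _, b in target if b == stratum) / len(target)
        total += share * mean(v for v, b in group if b == stratum)
    return total

raw_gap = mean(v for v, _ in qualifiers) - mean(v for v, _ in controls)
matched_gap = mean(v for v, _ in qualifiers) - matched_mean(controls, qualifiers)
print(raw_gap)       # 2.0 -- the unadjusted difference
print(matched_gap)   # 1.0 -- smaller, but a difference survives the matching
```

The gap shrinks once the background variable is equated, but does not vanish -- which is the shape of Belson's own result: support for the hypothesis survives the matching.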

What of ambiguity two? Maybe the hypothesis is right but in reverse? Here is where Belson claims to be on the verge of a breakthrough in 'reverse hypothesis testing'. This is explored a bit in the question and answer session after he presented his paper. Very basically (I am no statistician), Belson pursued a strategy rather like the one used to eliminate ambiguity one, regrouping and re-ordering the data in his study to test the reverse hypothesis (that violent behaviour causes the watching of violent TV). Again we divide the sample into controls and qualifiers (but this time in terms of the levels of violence which they commit, not the amount of violent TV they watch). We still have to eliminate ambiguity one first, and we do this by the artificial regrouping and 'matching' process described above. If any differences remain after the background variables are controlled, we might have some real effect. We then compare the results for the forward hypothesis with those for the reverse hypothesis -- specifically, we look at the shape of the relationship (measured by correlations or by the graph of the relationship). Are the correlations stronger with the forward or the backward hypothesis? (More technically, which hypothesis best explains the differences between qualifiers and controls?) The better fit is assumed to yield the better hypothesis. (NB if the two are the same, we cannot eliminate ambiguity two.) Luckily, the forward version of the hypothesis generally DID fit the evidence better (i.e. explained the differences between qualifiers and controls). However, even here this was NOT so strongly demonstrated for all types of violence and violent TV, especially for less serious violence (including 'aggression in sport or play' and 'an increase in swearing or bad language use').
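As a very loose sketch of the forward-versus-reverse comparison (invented data again, and a much cruder comparison than Belson's): split the same boys both ways and see which split leaves the bigger standardised gap on the other variable:

```python
import statistics as st

# One invented (tv_exposure, violence) score pair per boy
boys = [(10, 9), (9, 6), (8, 8), (7, 4), (4, 5), (3, 2), (2, 3), (1, 1)]

def standardise(xs):
    m, s = st.mean(xs), st.pstdev(xs)
    return [(x - m) / s for x in xs]

tv   = standardise([b[0] for b in boys])
viol = standardise([b[1] for b in boys])

def gap(split_on, measure):
    """Split the boys into bottom/top halves on one variable and return
    the mean difference on the other (in standard-deviation units)."""
    order = sorted(range(len(boys)), key=lambda i: split_on[i])
    half = len(order) // 2
    lo, hi = order[:half], order[half:]
    return st.mean(measure[i] for i in hi) - st.mean(measure[i] for i in lo)

forward = gap(tv, viol)   # high TV exposure -> how much extra violence?
reverse = gap(viol, tv)   # high violence -> how much extra TV exposure?
print(forward > reverse)  # True here: the forward split 'fits' better
```

Belson's actual procedure also matched on the background variables first; this sketch leaves that step out.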

4. Drawing a representative sample (discussed a bit further in the question and answer session). First, a stratified random sample of electoral wards was drawn within a 12-mile radius of central London. Within each ward, a random sample was drawn of 40 'starting points' (houses). At each starting point, interviewers asked if a boy aged 13-16 lived there. If so, he was interviewed. If not, interviewers proceeded on a fixed route (turning left or right, taking the next house or missing one, etc) until they found a 13-16 year old boy to interview. Supervisors ensured the random procedure was followed. If the trail went cold, a new starting point was issued. 21,600 homes were visited in all in this way. Of those identified as having a suitable boy at home (looks like about 1 in 10), 82% yielded a suitable interviewee, producing 1650 in all for the interviews (so 18% refused) (and subsequently 1565 for follow-ups).
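The random-route part of the procedure can be caricatured in a few lines (everything here -- the street length, the 1-in-10 eligibility rate, the walking rule -- is a made-up stand-in for the real fieldwork instructions):

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

STREET_LENGTH = 50   # pretend each route covers 50 houses

def has_eligible_boy():
    # Roughly 1 home in 10 had a boy aged 13-16, per the paper
    return random.random() < 0.1

def walk_from(start):
    """Follow the fixed route from a starting-point house until an
    eligible boy is found, or the trail goes cold at the street's end."""
    for house in range(start, STREET_LENGTH):
        if has_eligible_boy():
            return house
    return None   # trail went cold -- a new starting point would be issued

starting_points = [random.randrange(STREET_LENGTH) for _ in range(40)]
found = [walk_from(s) for s in starting_points]
print(sum(f is not None for f in found), "boys located from 40 starting points")
```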



Let's pause. Whatever your views about statistics (and I have known students who think using statistics is metaphysically evil in some profound sense), you have to admit that this is a very careful study indeed. It is a bit slippery to follow -- especially where Belson uses statistical techniques as if they were simply obvious or uncontroversial. And, of course, there are assumptions in using statistical tests, which statisticians rarely discuss (for another example, see my account of Fraser's famous study). But at least those assumptions are available to questioners. And if they are made, the study follows with an impeccable logic, pursued even to an inconvenient conclusion. Compared to many other studies, and certainly compared to the strongly held feelings of many commentators on this issue, this is close to being about as openly 'objective' as you are likely to get. It may indeed be too objective for many readers -- as we shall see the findings give no easy support to either side in the dispute about violent TV and its effects.


The findings

1. '…we cannot speak of firm proof in relation to the phenomena here investigated'

2. '…there is no suggestion …that other factors -- physical, psychological, environmental -- do not [also] enter in a causally potent way into the development of violence in a society….it would be ridiculous to suggest such a thing. It just happens that we are focusing our attention in this study upon the influence of television violence in its own right.'

3. Acts of violence seem quite common among London boys, 'although the greater part of them took the form of rough mischief and aggravation'. However, of acts rated (by experts) as 'serious violence' there were more than 9,000 cases in the six-month period under study (average being approximately 6 acts each). About 12% were 'repetitive' committers (and these were really responsible for the high average -- about half the boys were never involved at all). Generally truants, school dislikers, kids from large families and kids from 'lower socio-economic groupings' were more likely to be committers.

4. Diagram 1 presents the main findings:

Commentary
Don't worry about the modified controls and the significance columns unless you want to. I hope you can see differences in the scores for qualifiers and controls in each case (columns 2 and 3). There is much less of a difference between the two groups for the final hypothesis about violence in the company of other boys (and much more likelihood of the difference occurring purely by chance anyway).

Belson says that this table shows support for the hypothesis in the first four cases (but not in the last case), and that the most worrying finding is the support for the view that exposure to violent TV causes serious violence (although he points out that a kind of hard core of only 12% of the boys committed that sort of act).

Diagram 2 summarises the more specific findings. Don't panic! (Christ these graphics are naff)



Commentary

This was really tricky to modify and lay out for a website (and what a crappy result!!). I have omitted some variables altogether from the original (hence the missing numbers of the variables), since it helps to reduce complexity to do so (and they were not really important or interesting -- although Belson sometimes disagrees!).

This can be a scary diagram to read, but you can cope if you think of it in the usual way as having rows (horizontal) and columns (vertical). The next step is to pick particular rows or columns and blank out the rest (physically, with a bit of paper or card if it helps). 

So: look down the categories of violence in column 1 and find your favourite. Let's take violence in a domestic or family setting (variable number 4). Now look along that row. There is a shaded square in column 2 (which indicates 'violent behaviour: weighted total of all acts'). That shaded square indicates that there is a 'moderate amount' of support from the evidence that a causal relationship exists -- that exposure to violence on TV depicted in a domestic or family setting causes violent behaviour (general). The next column indicates that there is moderate support for a causal relation between this kind of TV violence and serious violent behaviour. Are you surprised by this finding? Is it more or less of a causal connection than you thought? Does this prove that exposure to TV violence of this type causes violent behaviour? Should you be concerned about exposing yourself or your kids to such depictions of violence?

The last two questions are open to debate, aren't they? Much depends on how 'moderate' the support is for a causal relationship of this kind. You have to bear in mind all the cautious comments we considered in the procedure section too. You might need to think carefully about this finding -- this variable seems not to be related to any other kind of violence, which is rather odd. One anomaly in particular really leaps out -- there seems no connection at all between this sort of TV violence and the last three types of violence (variables 11, 12 and 20 in the last three columns). Finally, as a person, you have to consider what risks you might run here. Looking down the rows again, there are more dubious types of violence, where the evidence offers a 'fairly large amount of support' for a causal relation (darker shading) -- e.g. 'fictional realistic violence' or 'comics' or 'newspapers'. Apart from that, there are really rather mixed findings here, with lots of gaps, question marks or 'NM' signs.

Belson's own conclusions

These do not always line up with the table in Diagram 2. Sometimes it is my fault because I have not copied all the data that Belson finds significant. Sometimes I think Belson takes a bit of a liberty with his own data, though. I'll indicate this as we go down. Here are the man's own findings:

'The hypotheses getting most support were: Serious violence is increased by long term exposure to:
 

  • plays or films in which close personal relationships are major themes and which features verbal or physical violence' [The original table shows only 'moderate support' for this separate factor as a cause of serious violence, and it is associated with very few others. I left it out of my table altogether!! Verbal violence on its own seemed more powerful, as did 'gruesome, scary and horrific violence']
  • programmes where the violence is [gratuitous]' [Odd -- unless I have a poor copy of the table, this is not shown at all!! There is a connection with violence in general though, but not specifically serious violence] 
  • programmes featuring fictional violence of a realistic kind [this is illustrated by my version of the table, thank goodness!]
  • programmes in which violence is presented as being in a good cause [didn't look to be associated on my version of the Table -- I didn't record this one]
  • Westerns of the violent kind [associated with violence in general but not with serious violence in my version -- but gangsters were!]'


'Hypotheses [with] little support were those [linking TV violence] with:
 

  • sporting programmes [although wrestling and boxing -- recorded separately on the original but not in my version -- were associated (with moderate support) with bad language and swearing, although, curiously, not with violence in general!!]
  • cartoon programmes [check! or for British viewers tick!]
  • science fiction programmes [check!]
  • comedy programmes [although there was a moderate relationship with violence in general, and, on the diagram, a very puzzling symbol appears -- offering a 'large amount of support' yet with a 'very uncertain' level of confidence!! One can only say …wha…?]


Attitudes

This is an important section of the findings, we have argued. Belson thought that attitudes were the key causal factors involved in the relationship between watching TV violence and acting violently. We have already argued that a strong causal theory is essential to clinch the statistical work on causality. However:
 

'by and large, the evidence tended not to support the attitude-type hypotheses'. This is indicated in the table by looking at the last three columns, which refer to attitudes -- there is only one small association with any of the independent variables (between row 9 and column 11), and even here this is accompanied by a degree of uncertainty.


Belson resorts to a common but rather dubious procedure to get around this problem. It must all be explained by unconscious attitudes. He introduces a new explanation -- 'disinhibition' -- 'the progressive reduction, through the constant presentation of television violence, of those inhibitions or constraints that the socialising agencies in a community had built up in boys against being violent'. Well, this might do the trick but:
 

  1. This variable is introduced right at the end of the study -- despite all the care taken to elicit hypotheses at the start, no-one seems to have identified this one, let alone measured it.
  2. Unconscious factors are notoriously difficult to measure and pin down, and it is surprising to find them wheeled out here in this 'objective' study. They can be supposed to do anything, and there is a strong suspicion that they are often used mostly to prop up dodgy but well-loved views. I know of a study on racism on TV which could not accept that the evidence for an effect was rather thin. This so contradicted what the authors wanted to believe and know, that our old friend the 'unconscious effect' was produced to square the circle. It happened in a rather circular way too - the authors just KNEW watching TV caused or magnified racism, it just 'must' do: there COULD only be long-term, unconscious processes at work (see Braham P in the 1977 Open University course DE353 -- Mass Communication and Society).
  3. Belson's account is riddled with assumptions. One concerns the rather behaviourist notion that constant repetition alone causes effects (this is picked up in Glover's discussion of his work). Another is recognisably functionalist - that 'a community' 'socialises' kids (normally) to be non-violent, and this natural equilibrium has been disturbed by the recent intrusion of TV. For a 'community' like Britain which has recently fought two world wars, not to mention a number of nasty violent post-colonialist skirmishes, the idea of a naturally non-violent community looks a bit strained.
  4. As Belson himself admits: '[This] is, of course, an hypothesis about underlying processes and it must, I stress, be subject to empirical investigation'.


Belson is confident enough to recommend new regulations for TV though:

To cut back on the total amount of violence shown [and this was in the 1970s!! What would he make of TV in the 1990s?]
To strengthen specific guidelines [focused on the more damaging types as identified in his research]
To carry on monitoring

Concluding Thoughts

We have come a long way with Belson, seen a lot of thorough and careful work undertaken (and a lot of money spent) -- and we are left with a plea for more research! I am afraid this is quite typical in this field. The data doesn't really support the strong opinions on either side. On the one hand, those who deny any effect of TV violence have to admit that this careful study did show some positive relationships between types of violence on TV and types of violent action in boys. On the other hand, the results are not consistently conclusive, and the key causal mechanisms have failed to attract much support.

Now of course this was an early study, and much might have changed since. Has the frequency or intensity of violence on TV got worse, do you think, for example? I would want to look at the evidence myself before even admitting that. Has the audience got more sophisticated about violence -- or more 'disinhibited'?

Newer methods became much more fashionable, as hinted at by Glover (in Haralambos M (ed) (1985) Sociology: New Directions, Ormskirk: Causeway Press). Why not study the audience and what they mean by violence, and ask them whether they think they have been affected? Ethnographic work claims to be able to do just that, of course, instead of approaching the whole issue so 'objectively' and indirectly as did Belson and the others. Unfortunately, ethnography has its problems too -- perhaps respondents will exaggerate or minimise or rationalise the effects? What about unconscious or long term effects again? How can we display ethnographic data so as to make conclusions clear and open to debate? Not everyone is capable of telling others exactly how they felt after watching TV -- and there is a danger that ethnographers will get to study the more reflective and interested viewers rather than typical viewers.

Finally, I hope you can see that there simply are no right procedures and probably no simple right answers in this area. If trained researchers end their work with uncertainty, it is likely that that is because it is a very uncertain matter. Of course, we have already suggested that one way out of the uncertainty is to use your common sense or your personal beliefs to decide -- and parents just have to do that in the end. However, it is quite another matter when politicians, pundits and self-appointed guardians of morality want to regulate the rest of us, especially when they quote impressive-looking statistical evidence to support them -- you can be sure that somebody somewhere is simplifying a very complex issue!
