In the early discussions of the possible forms an open university might take, certain problems soon emerged. 1 Teaching 'at a distance', using a correspondence system, would offer considerable advantages, especially in terms of cost, yet there were a number of difficulties in implementing a scheme on the scale intended. How could student involvement be encouraged, and drop-out minimized (see file)? Could university standards be maintained on the courses at the same time as student intake was to be 'opened'? How could the teaching system be designed and controlled to meet its aims in as efficient a manner as possible? It is possible to argue that these problems would have appeared insuperable had it not been for the reassurances provided by the educational technologists.
Educational technology was then a relatively new discipline in Britain which promised to offer radical solutions to these pressing problems. In its current versions, educational technology still appears as a rational way forward in design and evaluation, basking this time in the popular and glamorous aura of computers and 'information technology', and still promising a 'technological fix' for educational problems. The largely uncritical reception of these technologies will be discussed later, but the need to examine the promise and performance of educational technology at the British Open University, its best developed 'application', is of considerable contemporary relevance.
Any open university teaching 'at a distance' would clearly face design problems. Special channels of communication would have to be organized, both to transmit educational materials and to receive student responses. At the transmission phase, university academics would have to learn to master the technologies of print and broadcasting, and to work together cooperatively to produce suitably modern 'interdisciplinary' courses, especially at Foundation level. Given the pragmatic spirit which prevailed in the early days, however, the first problems to emerge were simply managerial ones.
Academics had to devise course materials that were to be printed and broadcast: this meant that the materials had to be ready well in advance of the start of the course. In the early days, the BBC in particular often needed to book studios and crews up to six months ahead of the broadcast date. Preparation of materials faced writers with novel constraints. There were limits to the volume of material, for example: programmes could only run for twenty minutes, and course correspondence packages ('units') had to be a standard length, partly so that standard postal charges could be used. Generally, course materials had to be prepared, in full and unusual detail, ready to be processed, edited, laid out for artwork, or for recording, well before presentation to students.
An unusually tight production schedule was required to make the system work at all. Open University courses could not be planned a mere week or two in advance, nor amended at short notice. Nor could courses be written according to the rather vague conventions of university lectures: notions of 'good television' or 'good writing' became central, partly because the early planners had no wish to be associated with low status 'cyclostyled notes', and partly because the media professionals had a considerable say in production. The 'production codes' of the professionals were supplemented by similar conventions of 'well-structured writing' from the educational technologists as we shall see.
Student responses had to be planned in a rigorous way too. Unpopular or ineffective teaching could not be detected or remedied as quickly as in face-to-face teaching. Assessment could not be left until the end of the course, nor simply added as and when necessary. Assignments too would have to be printed, distributed, and graded on a massive scale, and organized well in advance. Student responses would have to be sought deliberately and systematically, using devices such as postal questionnaires and other types of 'feedback', both to plan 'at a distance' and to give the appearance of a self-critical evaluation of what could still be seen as an experiment. Results would need to be processed and 'fed back' to course teams and management bodies in order to influence subsequent design and policy.
Educational technologists were thought to have had models for these processes because of their experience in the design and development of multi-media programmed learning. The relevance of this experience became more debatable, in fact, as the specifics of large scale university education became clear, but the first attempts to organize the teaching system seemed successful enough. Following the principles of classic rational management techniques, the early educational technologists performed a 'task analysis' and produced a detailed production schedule. This involved identifying a number of discrete subtasks involved in producing the finished course unit or TV programme (for example, the need to produce a number of drafts, to arrange specialist editorial or printing inserts, to book studio space, plan location filming, arrange interviews, and so on). Then these tasks were arranged in a rational sequence, resulting in a detailed and elaborate flow diagram which indicated precisely which of these tasks had to be completed by certain weeks and in which order. An example appears in one of the published accounts. 2 Each course team received a copy of this formidable document in their course handbook: for many academic members, the long list of deadlines and the complex set of procedures devised by the educational technologists were their introduction to the modernized procedures of the OU system.
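The scheduling logic described above (decomposing production into subtasks and ordering them so that every prerequisite is completed first) is essentially a topological sort over a dependency graph. A minimal sketch, with invented task names and durations standing in for the OU's actual schedule:

```python
from collections import deque

# Hypothetical subtasks for producing one course unit: the prerequisites
# each depends on, and an estimated duration in weeks. These figures are
# illustrative only, not taken from any OU production schedule.
tasks = {
    "first draft":      ([], 4),
    "second draft":     (["first draft"], 3),
    "editorial review": (["second draft"], 2),
    "artwork":          (["second draft"], 2),
    "book studio":      ([], 1),
    "record programme": (["book studio", "editorial review"], 1),
    "print unit":       (["artwork", "editorial review"], 3),
}

def schedule(tasks):
    """Order tasks so every prerequisite comes first (Kahn's algorithm),
    and compute the earliest week by which each task can be finished."""
    indegree = {t: len(deps) for t, (deps, _) in tasks.items()}
    dependants = {t: [] for t in tasks}
    for t, (deps, _) in tasks.items():
        for d in deps:
            dependants[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    finish, order = {}, []
    while ready:
        t = ready.popleft()
        order.append(t)
        deps, weeks = tasks[t]
        # A task can start only when its slowest prerequisite is done.
        finish[t] = weeks + max((finish[d] for d in deps), default=0)
        for nxt in dependants[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("circular dependency in schedule")
    return order, finish

order, finish = schedule(tasks)
for t in order:
    print(f"{t}: done by week {finish[t]}")
```

The flow diagram handed to course teams encoded exactly this kind of ordering, with deadlines read off from the earliest-finish times.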
For the educational technologists themselves, this approach was simply the obvious way to organize course production: the schedules merely rationalized what had to be done anyway. In the first months or even years, the pressure was intense: the OU had to be producing courses as quickly and as professionally as possible, in order to stave off any threats to cancel the whole project. A general election was imminent in 1970, for example, and the then Shadow Chancellor of the Conservative Party, Iain Macleod, was a strong opponent. 3 Given the novelty and precariousness of the whole enterprise, it is not surprising to find little resistance to the rational production schedule, despite the considerable demands being placed on staff and the decided 'intensification' of their work. Rationalization today is much more familiar in its effects on academics, of course. 4
It is already possible to detect conceptions of teaching and learning in these apparently neutral organizational procedures, moreover. For the scheduling to work, the production of course materials must present only the same sorts of abstract problems as the production of any other materials, if the same rational management techniques are to be used. An implicit view of education is involved here, one which was to become much more visible in later work: education is being seen as a matter of providing students with efficiently produced packages of correspondence materials and broadcasts. The educational purpose to which those packages were to be put is seen as secondary. The means of gaining an education - the course materials - have become the major priority and have quietly taken precedence over the ends to which they lead. Efficiency has been gained as a result of a swapping of means and ends, a 'means-ends displacement'.
This kind of displacement has often been identified as characteristic of bureaucratic organizations, 5 but a specific point about its effects can be made here. As one of the Professors in what was to become the Institute of Educational Technology (IET) argued, conceiving the problem of production in this way produces a particular constraint upon course design. 6 In the early days at least, academics were writing course units which were to be printed before units that actually preceded them in the final course. This can make perfect sense in terms of ensuring a steady flow of material to the printers, and keeping all the academics in the course team usefully employed writing rather than waiting for earlier sections to be completed, but it can seriously compromise the logical, conceptual or pedagogical development of arguments in the course. Academic sequencing took second place to sequences based on smooth production of the finished items.
The teaching system did give too much influence to production constraints, in the early days at least, to the detriment of the requirements of good argument. Arguments are expressed in the course materials, but are not identical with those materials. Nevertheless, the teaching system reduced the one to the other: the production schedule was designed to produce a mechanical sequence of physical products and to ensure that, say, Unit 6 arrived in students' homes in Week 6 and coincided with TV programme 6. Many of the claimed innovations in course production should be seen as aiming towards the same sort of administrative goal. The course team, for example, has been hailed as an innovation solving some of the problems of integrating different disciplines, 7 but for educational technology all this need mean in practice is that contributions from different disciplines can be integrated for the purpose of producing a rational sequence of course units. Tensions between different disciplines and approaches are as integral to, and as unresolved by, OU courses as any conventional ones. The pressure to produce materials on time is as large a constraint on open discussion of rival perspectives at the OU as at other universities and colleges. Indeed, an official ideology of integration, sometimes apparent as a convention of 'good design', might paper over the cracks in a misleading and 'closed' way.
Towards 'Effective Teaching'
Having solved the initial problems of organization in this way, the educational technologists were accorded full faculty status with the establishment of IET in 1970. Each course team was to include an educational technologist as a full member whose role was to give detailed advice on course writing and assessment design. Some of this advice will be examined later. There was also a spate of papers and articles establishing and developing the discipline of educational technology itself, many still unpublished (or obscurely published). Examination of these papers reveals the notions of education behind the specific advice to teach and assess 'effectively'.
There has long been a specialist literature devoted to the pursuit of 'effective teaching' in programmed learning and its applications. This was the source of inspiration for educational technology at the OU. The work had applied the same rational principles of task analysis and sequencing that have been outlined above, but this time to instructional materials directly. In this context, the subtasks involved in the 'mastery' of a particular skill have to be specified and sequenced. A considerable body of experimental, empirical experience had been accumulated to approach problems such as how 'big' each 'step' in the sequence should be, and, more generally, how different versions (branches) of a program could be developed to accommodate different student 'entry behaviours' and 'learning styles'. The detail and complexity even of behaviourist work in this tradition should not be underestimated. 8 Initially, this work was simply applied to the question of 'effective' teaching practices at the OU too.
Early critiques were soon launched at conventional ways of communicating arguments in academic life, for example. Conventional lectures were seen as extremely ineffective. Instead:
'The main teaching points must be fully explained, misleading statements and irrelevant scholastic displays must be eliminated. There must be no mistakes, non-sequiturs, gaps, or any other defects in the arguments. All written materials in fact need to be well structured and self-explanatory, and pitched at the right level of difficulty'. 9

Early design decisions were clearly influenced by the logic of programmed learning too: the concept of a course unit corresponded to a 'step' to be taken in a fixed time each week, for example.
Some compromises with the principles of programmed learning are also apparent, however. Although the planning documents had mentioned 'individualized' instructional materials as one of the major contributions to effective teaching, serious difficulties soon became apparent. Course production was costly, and multiple versions of courses could not be permitted. All that could be done, it seemed, was to offer some materials of different degrees of difficulty within the unit itself (what the Science Faculty called 'black pages', designed for those first year students with some previous knowledge of science). This represented a serious retreat from one of the central premisses of programmed learning - all that educational technology could do was to make the standard courses as effective as possible in some more abstract sense, as a substitute for 'individualizing' them. This must have meant a serious loss of teaching effectiveness, according to its own principles and practices, yet the argument of cost, the anxiety about survival, and the fundamental pragmatism of the approach helped educational technology adjust. However, at a very early stage, it was clear that a teaching system could be cost-effective or 'teaching-effective', but not both together. As with the examples above, academic considerations had been 'displaced': instead of designing a teaching system to be effective, the problem now became one of servicing an existing system to make it work effectively.
A Self-Improving System
As another legacy of programmed learning, the early courses were intended to follow a model of curriculum design based on behavioural objectives. These represent the subtasks that together produce overall mastery (although a serious problem arises when considering exactly how objectives do combine 10). These subtasks are supposed to consist of tangible, specific 'behaviours' which are outcomes produced by students learning from the educational materials they have received during their course. Behavioural objectives are the equivalent of the tangible subskills involved in acquiring some complex performance skill, such as learning to use a machine. This equivalence is revealing in that it involves another example of the reduction which is becoming familiar - teaching, under the behavioural objectives approach, has become instruction, the effective, one-way communication of instructional materials, a process of 'telling' as an early paper calls it. 11 Given a renewed current interest in instruction, skill training, and the behavioural objectives approach, especially in the provisions for school leavers and others 'in transition', it is worth examining the arguments in this paper in their essential form.
The paper argues that all communication is complex and involves difficulties, even for 'telling', which is defined as the communication of instructions, 'one-way', requiring no actual dialogue with the receiver. In telling, the relevant characteristics of the receiver have to be estimated, and this can be done more effectively than at present. The sender, too, can improve the likelihood of being understood by using various systematic instructional and other 'control procedures'. The paper claims that the 'necessary error-detection and error-removal procedures do in fact exist', and, although these are not specified in detail, the reader is referred to the work of Lewis and Pask, enabling a reasonable guess about their nature (see below). Apparently, there is every chance of the imminent development (in 1969!) of an 'effective technology of telling', with its rigorous 'practical procedures for detecting and forestalling important errors of interpretation'. The authors conclude by saying that '. . . a great deal of telling needs to be done in modern technological societies', and '. . . it is often easier than we think to dispense with feedback and hence reduce teaching to telling' (added emphasis). Lewis was still saying something very similar even after considerable debate and criticism, especially from the advocates of 'dialogue'. 12
In the OU system, however, some sort of feedback was possible in practice. Postal questionnaires have been mentioned already. One major type in use was the Course Unit Report Form (CURF) which required large samples (sometimes censuses) of students to reply to (mostly) fixed-choice questions about the uses made of components of the course, radio programmes for example, or questions about the 'difficulty' of units. 13 A more specific 'rapid feedback questionnaire' (RFQ) was also developed for some of the Science Faculty courses, 14 asking quite detailed questions about problem areas identified by the course team, such as the 'nomenclature' of a particular section.
In another exercise, units were read in draft form by panels of 'developmental testers' who could offer both 'fixed-choice' and 'open' responses. Further feedback could be gained from local tutors, or from more conventional sources such as External Examiners. In practice, such feedback was highly variable, but it appeared to be unusually systematic and 'objective', closely akin to the market research of established businesses, which was precisely what was intended, of course. The final link to a rational course production process had been added: specifying behavioural objectives acted to identify tangible goals to be achieved, and gathering feedback data (including student assessment scores at first) provided a means of establishing whether the goals had actually been achieved. Educational technology could claim to be offering a 'self-improving system'. 15
The model was to be used universally in the OU system. In response to cost-conscious anxieties about the 'usefulness' of TV programmes, for example, educational technologists would first request a list of behavioural objectives, and then use the list as a means of evaluating the 'relevance' of the programme in question, by estimating the relations to the objectives for the course as a whole.
For TV programmes specifically, much thought was devoted to trying to isolate objectives which were important but which could only be achieved by visual presentations. If such specific objectives could be listed, there would be a rational basis for the construction of strictly necessary TV programmes only. It is surprising how limited these early attempts to justify TV were. The analysis of visual materials as bearers of connotation or context, signifiers in their own right, remained untapped by the behaviourist psychology of the approach, although some members of course teams were pursuing such analyses at the time. Educational technology limited itself to literal 'objective', denotative meanings, and would have dismissed connotation, metaphor, or formal 'aesthetic' effects as hopelessly subjective, unscientific, and therefore irrelevant. The paradoxical fate of those OU courses which address and celebrate these dimensions from within the OU system, which does the opposite, will be discussed later.
The 'usefulness' of face-to-face teaching in local tutorials also came under scrutiny. Senior administrators tended to worry about the expense of public broadcasting, but the educational technologists tried to focus on face-to-face elements as the least justifiable area. To fuel their concern, there was rivalry between IET and Regional Tutorial Services (RTS) over funding and the share of the budget (in which RTS triumphed in the end), but disagreements also arose over the role of 'telling'. For IET, it should have been possible to make written materials so effective that face-to-face contact would have only a 'remedial' role, as a final 'control procedure' for the problem student, a view supported initially at the Vice-Chancellor level and prominent in the early proposals. 16
If the system really were self-improving, even this remedial role would soon disappear: the persistence and growth of RTS acted as a tangible critique of and threat to the basic tenets of educational technology. The issue was to be settled in an ostensibly rational way - the activities of RTS would be evaluated, claimed functions put to the test, supposed objectives listed and then checked.
It is impossible to ignore the 'political' aspects of rational design and evaluation exercises. In internal struggles between Faculties, a demand for the rational evaluation of rival activities means much more than the application of neutral techniques to improve effectiveness. In his account of the attempts to evaluate some of the activities of RTS, Thomas provides a discreet analysis of the politics of evaluation: RTS could refuse to cooperate at all in some exercises, or could redefine the terms of the evaluation. 17 RTS launched its own journal 'Teaching at a Distance' (and, later, 'Distance Education'), and many issues contain arguments for the intangible yet valuable outcomes of face-to-face teaching. 18 IET in turn made many attempts to 'colonize' regional tutoring, to redefine it as 'remedial', to rationalize and control it as an activity, 19 and, above all to control the effects of tutors' assessment practices (see Chapter Three). These activities made it hard to sustain the claim that educational technology operated on purely 'objective', disinterested criteria and procedures, of course. The same undertones are present in discussions of attempts to 'evaluate' summer school teaching, 20 or preparatory courses. 21
Resistance and challenge arose in course teams too. In making their case, some educational technologists realized that their procedures failed to meet a number of objections and practical difficulties, even where course teams were willing to follow their advice. Deriving behavioural objectives from more general statements of intent, which typically referred to specifically 'educational' goals such as 'encouraging students to understand and discuss arguments', presented problems. There were no simple rules or procedures to perform these transformations (not really surprisingly, of course, since this is a problem of notorious complexity). It was either simply 'assumed' that objectives would be derivable from general aims, 22 or it became a matter of 'experience'. These embarrassingly imprecise 'procedures' caused both controversy and a loss of legitimacy.
Educational technology could provide only a few technical guidelines for the framing of objectives, once derived, such as an insistence on the use of certain verbs, drawn from Bloom's taxonomy: 23 'verbs of action such as to write, to solve, to construct, to select, to identify, to compare'. 24 These verbs still appear in, and lend an air of precision to, the objectives in new curriculum initiatives for those 'in transition' from school to work (or unemployment), 25 but they work only as a double operationalism: 'behaviours' are to serve instead of learning outcomes, and then 'verbs of action' are operational indicators of 'behaviours'. And as one repentant educational technologist says in a later paper, specifying objectives of this kind is '. . . quite fruitless until you were also willing to take into account the huge network of knowledge and understanding that lay behind . . . [these]. . . actions'. 26
Other guidelines reveal other problems. Writers were urged to specify conditions under which performances should occur, and to indicate an expected level of performance, in order to increase precision, as in this example: 'Given the names of the concepts listed in Table A (condition), the student should be able to identify correct definitions (behaviour) of at least 70 per cent of the concepts (performance level)'. 27 Yet in order to achieve precision of this kind, the student has become tied to trivial, and thus apparently unambiguous, manipulations of what the text provides. The example also demonstrates, perhaps, the 'fruitlessness' and the largely ritualistic nature of the use of 'verbs of action' like 'identify' as 'behaviours'. There is also a hint of some of the painful circumlocutions of later attempts to over-specify the content of simple skills in, say, some prospectuses for youth training schemes. 28
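The three-part format of such an objective can be captured in a small data structure, which also makes the double operationalism plain: 'achievement' reduces to a proportion crossing a threshold. The class and the figures below are illustrative, not drawn from any OU course:

```python
from dataclasses import dataclass

@dataclass
class BehaviouralObjective:
    condition: str            # e.g. "Given the names of the concepts in Table A"
    behaviour: str            # a Bloom-style 'verb of action' plus its object
    performance_level: float  # minimum proportion required, e.g. 0.70

    def achieved(self, correct: int, attempted: int) -> bool:
        """The whole of 'learning' collapses into one threshold test."""
        return attempted > 0 and correct / attempted >= self.performance_level

# The example objective from the text, restated as data:
obj = BehaviouralObjective(
    condition="Given the names of the concepts listed in Table A",
    behaviour="identify correct definitions of the concepts",
    performance_level=0.70,
)
print(obj.achieved(correct=15, attempted=20))  # 75% -> True
print(obj.achieved(correct=12, attempted=20))  # 60% -> False
```

Everything the critics point to, the 'huge network of knowledge and understanding' behind the action, falls outside the three fields of the record.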
Difficulties emerged at the next stage in course design too. Having arrived somehow at a list of suitable objectives, the course team should then select suitable teaching strategies (and even 'tactics' and 'methods' according to one version, 29) supposedly from a wide range of alternatives. Yet it has already been seen that choice was severely limited by the factor of cost: any 'individualized' or 'branched' teaching sequences could not run. Further, no experience was available in teaching unconventional students in a 'distance' teaching system, and the promised flood of detailed and direct knowledge from feedback exercises was not forthcoming. In these circumstances, 'choice' of teaching strategies is rather a euphemism - many course writers must have relied on best guesses or their own, allegedly flawed, experience in conventional universities after all. Educational technologists may have urged writers to rethink their practices and assumptions, and have offered whatever alternatives sprang to mind, but this is not the calm and rational selection of available alternatives, guided only by considerations of 'effectiveness', as in the model. Indeed, the realities of course team life seem quite different, as later files suggest.
As suggested above, feedback data proved disappointing in practice too. Developmental testing was always variable, 30 and tended to suffer from the drop-out problems of all panel studies. Questionnaire based data faced the usual problems of interpretation too. As market research only, the exercises were able to sidestep some of the interpretive difficulties raised by questionnaire data in academic research, but even so, important limitations emerged. Only the broadest picture of student responses was sketched by the data, with little clear guidance to inform policy. In one case, CURF data were apparently useful in showing that a particular unit in the Science Foundation Course was 'very difficult', and the Course Team decided to advise students to omit it, 31 but even here the data only confirmed an already strong doubt in the minds of the Course Team.
Finer adjustments were impossible to ground on the data: if, say, 50 per cent of the sample found a unit 'difficult', it was not clear whether the unit should be modified and, if so, precisely how to do it. More generally, feedback data always had to be interpreted and discussed: a high level of reported difficulty might be desirable or inevitable, for example. 32 With the more detailed market research like RFQ, everything depended upon whether the course team knew the right questions to ask in the first place, as usual with questionnaires. Some 'right questions' are likely to arise in the course of an activity which educational technology valued hardly at all - critical research or theorizing about education, especially when designed to uncover 'unintended consequences' or student-initiated strategies. Trying to be directly and immediately 'useful' resulted in gaining simple and 'objective' data which were so difficult to interpret or so insecure as to be of very limited use after all.
As in other universities, student assessment scores were also seen as an important area of feedback to gauge teaching effectiveness. Once again, however, ambiguities soon emerged. A very low pass rate could be interpreted as an indication of ineffectiveness of the course, but equally as an indication of the unsuitability of the students. Even if the former interpretation were preferred, it would still be difficult to know how to proceed: what if the assessment items themselves had been badly designed, made ambiguous or unnecessarily difficult, or had been insufficiently related to the course materials? These possibilities led to some very interesting developments at the OU, explored in the next chapter. They end in a contradiction that was not suspected in the early attempts to use assessment scores exclusively as a diagnostic or 'performance indicator', and which seems unsuspected in current attempts to do the same. It seems possible to design assessment rigorously, to control any variations in difficulty for example, but only by losing the diagnostic function altogether, and turning student assessment into a matter of grading students in pre-established rank orders of various kinds.
At the OU, what has been termed 'hard-line' educational technology, 33 was largely abandoned as a result of these problems, and remained as an idealization, or ideology, of course design instead. This was still 'useful', of course, since it provided a scientific and experimental public image. Yet within IET, although problems were noticed, they were still not grasped clearly. Instead, attention became focussed upon the limits of the particular 'behavioural objectives' variant of the approach. These limits had been widely discussed and criticised, possibly nowhere better than in the OU's own Education Studies courses (see E283 and E282 especially). A more sophisticated model appeared to offer a new rational solution to course design.
This approach is still relatively unknown, save for some slightly obscure published papers, and some references in a recent 'popular' book about computers by one of the main contributors. 34 The general principles are familiar, however. Professor Lewis, in charge of the project, shared the basic concern for effective communication, and had written one of the key early position papers on 'telling', discussed above. But his model drew on the logic of man-machine interaction, on the work in 'artificial intelligence', embedded not in the managerial and instructional work of programmed learning, but in the practice of computer programming. Whereas the earlier model took as its test of successful communication the ability of the human recipient to perform some tangible and unambiguous task, Lewis's test concerned the successful operation of a computer program.
It is well known, even by non-specialists, that ambiguous, insufficiently rigorous, or 'noisy' programs, or draft programs which make the wrong assumptions about the capacity of computers, produce poor performance. Successful programming requires adherence to strict principles of communication. One is the principle of 'closure' which asserts that all the basic information needed to perform complex tasks must be provided in the initial input, and there must be logical rules, in the operating language concerned, to guide combinations of basics.
Lewis and Pask had argued that the parallels between computer programming and human communication were not simply fortuitous or heuristic: human psychology itself was governed by cybernetic principles. An adherence to general systems theory underpinned the whole enterprise. Pask's recent work argues this in a typically flamboyant way, suggesting that humans and machines are evolving to produce a new common species - 'micro man'. 35
Lewis could demonstrate the practical and immediate power of the approach in his early work for the Civil Service, which transformed some complex tax regulations, written in bureaucratic prose, into a simple 'algorithm' or 'decision tree'. This required readers merely to answer 'yes' or 'no' to a set of basic questions and led them via branches to the answer they sought. 36 In effect, Lewis was trying to develop this undeniably successful early experiment into a procedure which would make the academic language of OU units into effective communication as well.
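The 'algorithm' form Lewis used can be sketched as a binary tree of yes/no questions, each branch leading either to another question or to an answer. The questions and outcomes below are invented for illustration and bear no relation to the original tax regulations:

```python
# A minimal yes/no decision tree of the kind Lewis built from complex
# regulations. A node is a dict holding a question and two branches;
# a leaf is a plain string giving the reader their answer.

def make_node(question, yes, no):
    return {"question": question, "yes": yes, "no": no}

# Hypothetical questions, standing in for the bureaucratic prose:
tree = make_node(
    "Is your income from employment?",
    yes=make_node(
        "Do you have a second source of income?",
        yes="Complete form B",
        no="No further action needed",
    ),
    no="Complete form A",
)

def traverse(node, answer):
    """Follow the branches, calling answer(question) -> bool at each
    node, until a leaf (a plain string) is reached."""
    while isinstance(node, dict):
        node = node["yes"] if answer(node["question"]) else node["no"]
    return node

# A reader who answers 'yes', then 'no':
answers = iter([True, False])
print(traverse(tree, lambda q: next(answers)))  # -> No further action needed
```

The reader never sees the prose of the regulations at all; each answer prunes away every irrelevant clause, which is what made the early experiment so strikingly effective.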
Pask had had experience in using computer programs as a teaching device, and a brief summary of one rather obscure example will explain his approach. The first step is to scrutinize available texts or authors to obtain 'theses about the subject matter', in this case probability theory. An 'entailment structure' is developed as a 'canonical representation of these theses, or an approved set of them (for instance theses sanctioned by a Faculty Board or attuned to a culture)'. 37 This process is much facilitated by being able to 'talk with the authors', however.
The next step is to 'indoctrinate the expert with respect to the 'metalanguage' of relational networks'. This metalanguage consisted of 'certain relational operators' which combine the theses together into a network in such a way as to guarantee closure. The expert in probability theory, having learned the metalanguage, is 'required to express the relations he asserts between his topics in these terms, as a graph', and, later, to impose some order of priority on the nodes of the graph in order to help develop teaching sequences. This final stage also involves 'sanitization', 'the elimination of psychologically engendered inconsistencies, most of which are due to limitations on short term storage. . . [of human memory]'. 38
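The 'entailment structure' itself can be pictured as a directed graph, and 'closure' as the requirement that every topic in the network bottoms out in the basics supplied in the initial input. The topic names and links below are invented for illustration (Pask's actual networks and operators are not reproduced in the text); only the idea of a mechanical check for closure follows the account above.

```python
# A sketch of an 'entailment structure' for probability theory: each
# topic maps to the topics its theses presuppose. The links are
# invented; the point is the closure check, not the content.
ENTAILS = {
    "independence":            ["conditional probability"],
    "conditional probability": ["probability measure", "event"],
    "probability measure":     ["event", "sample space"],
    "event":                   ["sample space"],
    "sample space":            [],   # no prerequisites: a 'basic'
}

def closed(topic, basics, path=frozenset()):
    """True if every prerequisite of `topic` is grounded in the basics."""
    if topic in basics:
        return True
    if topic in path:                # a cycle: cannot be grounded
        return False
    prereqs = ENTAILS.get(topic)
    if prereqs is None:              # topic never defined at all
        return False
    if not prereqs:
        return True                  # no prerequisites: itself basic
    return all(closed(p, basics, path | {topic}) for p in prereqs)

print(closed("independence", {"sample space"}))   # prints: True
```

'Sanitization' would then amount to revising any topic for which such a check fails, until the whole graph is grounded.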
Being able to develop 'expert systems' like this has obvious advantages outside teaching applications, and these have been much discussed. For their own work, Pask and Curran describe a remarkable 'science fiction' computer simulation where the strategic capacities and decision-making processes of US Airforce pilots were modelled as they simulated combat. The game ended with the computer program making combat decisions on behalf of the pilots it had been modelling, as a test of its validity and predictive power. Apparently, the decisions made by the program matched the human ones exactly, but were made much more quickly and without any emotional reactions or other 'noise'. 39
Military funding has long been of importance to 'artificial intelligence', but perhaps best known are the commercial implications of 'expert systems' of this kind. These are better understood, possibly, in terms of deskilling and the shift in control that such technology can bring. 40 Pask and Curran prefer the more utopian versions of a 'post-industrial' future, however. 41
In educational settings, modelling the student as well is equally important, and Pask's probability theory program provided feedback loops to check on students' progress and guide them in developing their expertise. Students were even permitted to 'innovate' by producing novel explanations of 'theses', as long as these could be justified later or agreed by the expert. This clearly goes beyond the rather conservative and limited use of entailment structures discussed so far: systematically generated 'novelty' would display the conventional nature of existing expert knowledge and begin to raise doubts about the authority of 'theses sanctioned by a Faculty Board'.
However, the main point of the program was to teach concepts effectively rather than generate systematic 'novelty', and there is some evidence to suggest that closely controlled, computer-regulated sequences of learning were indeed more effective than 'free learning'. 42 In one of many examples of the debatable appropriation of ordinary language, Pask describes the closely regulated teaching option as 'conversational learning': the casual, open-ended, egalitarian 'conversations' of ordinary usage are in fact almost the opposite kind of encounters to the closely focussed 'diagnostic', information-processing encounters with Pask's computer programs. Pask sees 'conversational learning' as more adequate for modelling human thought processes (as against, say, behaviourism), but we are still far from the capacities described in, say, Habermas's account of conversation as unrestricted communication designed to raise and test validity claims (discussed elsewhere).
As in this experimental work, academic arguments at the OU also could be conceived as combinations of basic concepts or 'operations' in the 'knowledge structures' approach. Academic arguments could then be reconstructed as a pre-selected network of logical connections between the basic concepts. This would clearly facilitate rigorously effective teaching. The principle of closure, for example, would be used to avoid redundancy of information as well as guiding the tight logical unity of the arguments. 'Irrelevant scholastic displays', 'non-sequiturs' and 'gaps' could be identified and eliminated at last. Students would be asked to do only those complex tasks which were known to be possible on the data base they had been given, and this could, for example, control the level of difficulty of a unit in a rigorous way, and minimize the unequal effects of previous experience. Learning would be simplified and standardized - any student would need only to learn basic concepts and the few terms in the metalanguage which controlled the combination of these concepts. Teaching could proceed in a rigorously controlled sequence to gain the advantages over 'free learning' discovered in the experiments.
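The ambition can be rendered in a toy form: an academic argument becomes a combination of sanctioned basic concepts joined by a small metalanguage of operators, and any task set for students can be checked mechanically against the permitted base before it is set. The concepts and operators below are invented for illustration; nothing here reproduces an actual OU unit.

```python
# A toy version of the 'knowledge structures' ambition: arguments as
# combinations of basic concepts joined by a few relational operators.
# Both sets are invented; the point is the mechanical admissibility check.
BASICS    = {"class", "status", "power", "mobility"}
OPERATORS = {"causes", "correlates_with", "is_a_kind_of"}

def well_formed(task):
    """A task is admissible only if it uses sanctioned concepts and operators."""
    subject, operator, obj = task
    return subject in BASICS and obj in BASICS and operator in OPERATORS

print(well_formed(("class", "causes", "mobility")))    # prints: True
print(well_formed(("class", "subverts", "hegemony")))  # prints: False
```

On this model, difficulty could be graded by the number of such links a task chains together, and nothing outside the sanctioned base could ever be asked of a student.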
Once students had learned the rules and gained access to the full network, academic argument would be demystified, the content disentangled at last from the affectations and stylistic camouflage of academic form. Students could use the network to pursue combinations of their own, such as 'applications' of, say, social science to their personal interests and positions, as in the radical variant of 'open-ness' discussed in the previous chapter. 43 Many of the problems beginning to be addressed in sociological analyses of the elitist nature of 'school knowledge', 44 or the obfuscations in teachers' linguistic styles 45 might be solved by this approach: instead of trying to establish an 'intercultural classroom' to thrash out translations between academic and lay discourse, why not solve the problem by 'indoctrinating' teacher and learner into a common metalanguage pared down to universal logical terms ?
'Knowledge structures' could provide rigour and guidance for assessment too. As diagnostics, Pask would ask students in his experiments to replicate parts of the entailment network, or perform some task which is possible within the network, such as deducing a minor 'novel' combination of concepts 46 or links between concepts. More complex combinations would require greater levels of expertise and previous knowledge, suggesting a rigorous way to arrange assessment tasks at different levels of difficulty, to provide an agreed knowledge base for those tasks, and to minimize the notoriously subjective effects of 'essay style' or 'question interpretation'. Structured course materials were to be combined with assessment techniques precisely in this spirit in the well-developed theories of assessment discussed later.
It is now possible to see why advocates of 'knowledge structures' claimed a considerable advance over the older behavioural objectives model. Behavioural objectives represented observable parts of a whole network of concepts and deductions that students had to master if they were to attain the specified stages. Instead of just listing certain stages to be attained, making available the whole net overcame some of the unnecessary restrictions on student freedom to explore rather than just arrive at fixed points, and concentrated attention on the cognitive skills required to attain mastery in educational tasks, rather than on the performance of required behaviours in instruction. 'Knowledge structures' would provide rigorous principles of course design, evaluation and assessment at a more advanced technical level, and without the loss of student 'creativity' entailed by behavioural objectives.
These claims must be taken seriously. 'Knowledge structures' do allow for student uses of networks of knowledge, in principle, and do raise critical implications for the ways experts or pedagogues control and package knowledge. There are genuine potentials in the approach. But the context, and the ways in which the scheme was to be mediated through an existing organization are crucial in determining whether any critical potentials are released in practice.
In the OU context, conservative ones were developed instead: 'knowledge structures' became techniques for facilitating effective but one-way communication. This became clear as a result of some serious 'practical' problems with the approach as soon as it was brought out of the controlled conditions of the laboratory and away from the uncritical atmosphere of the enthusiasts who had developed and tested the procedures.
The transformation of academic language into structured metalanguage produced problems, for example. This is not surprising given the substantial history of flawed attempts to clarify transformational rules for translating one kind of discourse into another. 47 Transformational problems are even to be welcomed, perhaps, as an abstract safeguard of human language against encroachments from artificial intelligence, but this stands only as long as abstract, 'philosophical' criteria determine the outcome of the arguments! At the OU, pragmatic solutions to these problems were preferred. It has already been seen how the 'experience' of the writers, or the sanctions of a Faculty Board, can serve to effect the necessary operationalisms. Pask himself was to refer any problems ultimately to the 'numinous authority' of the Course Team. Other members of IET were to offer even blunter advice: those academic subjects which could not or would not be rendered in the metalanguage of 'knowledge structures' should not be taught at all at the Open University! The use of power of this kind is, of course, one major solution to the problems of applying new technology more widely.
A less drastic, but equally revealing, solution was to try to establish the academic bona fides and widespread applicability of the 'knowledge structures' approach. One participant collected a series of examples of the use of diagrammatic representations of arguments, drawn from the fields of 'anthropology, theory-building, mathematics, philosophy, biology, information science, artificial intelligence, education'. 48 These examples were then discussed as early attempts to represent 'the structure of knowledge'. In a similar vein, a number of 'coherent procedures for interpreting texts' were reviewed, equated, and also seen as early attempts to present schemes for 'conversational learning' in the Paskian sense. Everything led to the triumphant emergence of 'knowledge structures' as the culmination of all these attempts in all these fields. The whole argument really depends on whether diagrammatic representation or coherent procedures were essential to the examples, however, and on whether 'structure' meant the same thing in all these cases. In one case, for example, a diagrammatic structure was really used, in the original text, to parody and rebuke the tendencies towards closure in linguistic philosophy; 49 in another, 50 interpretive procedures were based on Weberian ideal types which are not really just 'scientific' routines. 51
This rather selective abstraction from varied examples arises from the special interest in rigorous and effective teaching at the heart of educational technology at the OU. In the dichotomous terms of the enthusiasts, there is either rigour and 'procedure' or mere intuition and relativism. Educators either provide rigorous rules and procedures for students or 'simply chuck [them] in at the deep end'. 52 Any attempt at all to progress beyond relativism and indifference to the fate of students can be counted as a step in the right direction! The legitimacy of 'knowledge structures' is also guaranteed in this exercise as somehow immanent in earlier academic traditions, of course.
Another dichotomy appears in a rather rare discussion for educational technology, the 'philosophical' issue of the relation of concepts to the empirical world. Normally, this issue is 'solved' by the devices discussed above, but in the paper under discussion a brief criticism of 'empiricism' is offered ('crude', and 'limited too severely by the bounds of present experience'), and in the process the author comes close to developing an argument for the use of 'knowledge structures' to break with experience and convention, and thus edge into critique. However, at the last moment, the piece opts for a perceived 'opposite' to empiricism, an equally limited rationalism:
'. . . knowledge structures. . . will shift the focus of attention towards the internal logic of the subject matter: a welcome change, and a fruitful escape from the problems entailed by 'rock bottom' empirical observations'. 53
In its most 'practical' guise, educational technology offers almost no transformation of the practices it encounters, and confines itself to tidying up sequences of argument, improving punctuation and layout, and generally editing minor errors and inconsistencies in the name of fairly low-level conventions of 'effective writing'. These activities are valuable, and more than one academic at the OU probably owes to them a public reputation as a clear communicator. Yet these are not the radical practices that would overcome the flaws of conventional teaching, and many conventional teachers (and editors, printers, and producers) possess these skills already. The more abstract 'theoretical' 'research-based' work can offer procedures to reconstruct arguments and subject them to some kind of logical, philosophical, or even aesthetic critique, but even here, 'practical' considerations are not far in the background and limit the analysis often before it has had a chance to develop.
Of course, implicit in these points is an alternative view of theory and its role, one which follows Habermas in seeing theory as critique, as discourse 'dominated by reflection' rather than 'dominated by practice'. 54 This view is expanded in the final Chapter, but it is used here to make an 'internal critique' of educational technology: by not being sufficiently reflexive, educational technology ceases to be very practical when it is 'applied' to new contexts and conditions.
Yet compared not to 'critical theory' but to conventional pedagogical practices, educational technology does look critical, analytic, and powerful. Educational technology was able to expose so many of the assumptions of conventional practice, and perform really useful work in testing the implicit claims to effectiveness of conventional teaching and, above all, student assessment. Compared to some of the outrageous 'scholastic displays' on offer in the privacy of conventional universities, the goal of clear and demystified argument still seems most attractive. Compared to the vague and amateur way in which student assessment is still organized elsewhere, the precise and professional testing of the effects of different assessment policies that went on at the OU still seems most valuable.
It should not be thought, moreover, that all the objections to educational technology in practice were based on appeals to more reflective analysis, as is being argued here. One source of opposition was simply a conservative desire to retain the old practices and to fight off even the limited critiques of educational technology. Another source of opposition arose from what might be termed 'progressive' stances which opposed notions of 'creativity' and 'subjectivity' to the encroachment of excessively rational specification, analysis and test that educational technology seemed to threaten.
Both conservative and progressive stances are themselves flawed, however. Conservative positions either ignore the close connections between conventional practices and massive failure rates for 'negatively privileged' groups, or accept these outcomes as a price to be paid for 'proper standards' (or 'quality' in current parlance). Progressive positions can fail to grasp how the effects of a teaching system can impose serious constraints on 'subjectivity' and can thus smuggle back in structured failure, 55 a mistake equal to the insensitivity to social relations in educational technology. More generally, whether 'subjectivity' can still be seen as a source of freedom and resistance in modern societies, whether 'creativity' does not have a 'dark side', also needs to be examined before a pedagogy (or a politics) can be based on them.
Although educational technology did not actually win all its battles with its critics, it was able to present itself as more democratic and rational than its conservative rivals, and as far more 'realistic' than its progressive ones. It had already secured a decisive strategic advantage in having been involved in the design and legitimation of the teaching system itself. Once a centralized 'distance teaching' system is established, partly determined by, and certainly undergirded with, substantial cost advantages, educational technology draws support in its turn from the system. Conservative objections seem to wish merely to turn back the clock, or look fainthearted. Progressive objections look marginal or utopian faced with the massive facticity of a teaching system that 'works'.
It is possible to move to a level of critique that will avoid some of these flaws and suggest adequate restraints on the march of rationality and technology, and this is represented here by 'critical theory'. However, for practical 'political' purposes, critique of this kind is too late to influence the basic shape of the OU system itself. The cognitive monopoly of educational technology can now be challenged, but the politically appropriate time to do so has passed: while the OU system was being established, educational technology seemed plausible enough to legitimize the plans, and had no serious rivals. After the system had been established, critique as a practical matter of suggesting alternatives became unrealistic.
Educational technologists were outvoted on some course teams, and their advice was ignored; some academics refused to cooperate with the demands for objectives or multi-choice test items; Regional Tutors and Counsellors resisted redundancy; the original grand, imperialist designs gathered dust. Even within IET itself, opposition to 'telling' became apparent. Morgan speaks of a 'paradigm shift' in educational technology, towards more student-initiated learning, occurring in about 1973, a move supported by others. 56 Yet the papers in which these heresies appear provide classic examples of the problems involved in liberalizing an existing system: whether 'project work', for example, can ever be widespread, whether the requirements of comparability of standards in assessment will prove to be a major constraint (considered in the next Chapter), and whether student choice really is a workable option remain matters of doubt.
The abiding triumph and ultimate guarantor of old-fashioned, 'hard line' educational technology remains the teaching system itself. In that system any 'resistance' takes on a contradictory air since it occurs within a set of arrangements that embody and express the very beliefs which are being opposed. Refusals to cooperate look tokenist and purely defensive; attempts to use the system for radical ends, by producing 'critical' materials, run the risk of indifference or incorporation into an academic 'culture industry'; and student 'resistance' to the official demands of the assessment scheme can turn out to be a very dubious kind of 'creativity' indeed. The 'softening' of educational technology might have helped it seem more 'realistic': it really can now appear, as does the teaching system itself, as the harmless systematization of existing common sense conventions of academic life, including a certain 'all too human' vagueness and lack of precision.
NOTES AND REFERENCES
1 See the White Paper, 'A University of the Air' (1966), London, HMSO, and the Green Paper, 'The Open University: Report of the Planning Committee to the Secretary of State for Education and Science' (1969), London, HMSO. The latter is also known as 'The Venables Report'.