History Examinations from the 1960s to the present day





This analysis of the role of examinations in history teaching starts with what we have now and seeks to explain how we got there, as well as why there were a number of ‘dead-ends’ along the way.
Features of the current examination system in history
GCSE

As National Curriculum history is not tested by SATs, we are dealing only with 16+ examinations (GCSE and A level).


GCSE history has the largest entry of the ‘optional’ subjects (33% of GCSE students took history in 2009 – 197,800 entries). There are three exam boards – OCR, Edexcel and AQA – which between them provide syllabus choice, usually between Modern World and Schools History Project. Other syllabus options exist, but across all three boards these two dominate at GCSE, and there is little variation between AQA’s Modern World syllabus and those offered by Edexcel and OCR. Publishers produce textbooks ‘tweaked’ for the specific board – some are written by examiners. This situation substantiates the criticism voiced at the conference on 14 Oct. 2010 that GCSE has become very narrow.
Recent changes to GCSE from 2009 mean that there is a mandated section of British history on the syllabus. Some teachers like the new version (interview Linda Turner and Rob Snow) and others do not (interview Darren Hughff). Given Darren’s school situation (a very challenging area of Hartlepool) compared with Linda and Rob’s (semi-middle-class Knaresborough), these reactions probably reflect an increase in content which will be more demanding for less able children. Coursework done by the pupil outside class has now been replaced by ‘controlled assessment’ accounting for 25% of the marks (Maddison, 2009). The controlled assessment is a written question completed using school-based resources (which could include the internet in addition to notes, books, etc.) in school time, supervised by teachers over a set period. Like coursework, the controlled assessment is marked by the teacher and moderated externally by the examination board, which samples the students’ work.
Syllabuses are now called ‘specifications’ and are very detailed – not so much in terms of content (though they are defined very closely in those terms as well) but in terms of assessment of pupil performance.
Another significant criticism (recently voiced by the HPAT group) is that GCSE is disconnected from the rest of the National Curriculum. This is precisely what the original NC was supposed to avoid, as repetition of content was a major flaw in history teaching pre-1990. However, the decision to make history optional after Key Stage 3 has had this effect. Because so many children complete their history education at the end of Year 9 (age 14), the NC has been adjusted to ensure that key national and international events are included (e.g. the World Wars). However, there is never time to cover these in detail – pupils who take Modern World GCSE will study the twentieth century in detail again, and some will cover parts of it at A level too.
A level
A level history has more entries than ever before (45,066 entries in 2010); on that basis, it is a very successful qualification. It is also highly regarded and regularly features in the press as one of the ‘valued’ A levels (as opposed to Media Studies, which is regularly pilloried in the press).
However, there are plenty of critics of the current A level regime (see de Waal, 2009). Since 2000, all A levels have been ‘modular’, i.e. the syllabus is broken down into ‘chunks’ called units with a short exam for each one. It is also segmented by year, with the Advanced Subsidiary (AS) level for the first year examined at an intermediate level between GCSE and A level, and the A2 examined at full A level standard in year 2. Initially, each A level had 6 units (3 for AS and 3 for A2). This was amended in 2008 to 4 units (2 for AS and 2 for A2) following criticisms both of the ‘bitty’ nature of the 6 units and of the common practice of re-sitting units, which led to almost 100% pass rates and was held by some to have undermined standards. Since 2008, restrictions on the availability of resits have been introduced. The substitution of 4 units for 6 has not, however, reduced the variety of topics included, as it is perfectly possible to have a unit which combines two very disparate topics (e.g. Stalin’s Russia and Civil Rights in the USA) which are simply studied sequentially. The A* grade was introduced at A2 level only, for those reaching 90% at A2 (who must also achieve an A across all units without resits).
Various criticisms of A level history remain:-

  • The modular system prevents coherent, continuous, cumulative or synoptic learning – all of which were characteristic of the outline courses common before 2000 (see transcript of interview with Eric Evans). The assessment system encourages teaching (and learning) to the test and no more.

  • Examiners’ textbooks have had a pernicious influence because they promise to enable teachers and students to produce the results – critics claim they are often hastily and shoddily written by examiners who do not have the time to do the job properly. They also crowd out other worthwhile A level textbooks which stretch students more. Another consequence is that students no longer do wider reading or read the works of leading academic historians (despite the fact that quotations from secondary works are often used in exams).

  • Essay skills have declined due to the lack of consistent practice of extended writing (though most A level history syllabuses do contain a personal study or extended writing for coursework). However, AS level mainly consists of short answer questions which require only up to one page of writing in response.

  • Narrowness of content – Tudors and Nazis have dominated the syllabuses since the 1990s. Since 2008 a number of new syllabuses have been introduced, so this may be changing. The focus on ‘Henries and Hitler’ was partly a result of good resources in those fields of study (where a great deal of recent historical work has also been done), and it is then perpetuated over many years: teachers’ expertise is an important factor at A level, as is the major cost of books and other resources.

  • Assessment of A level history has become fiendishly complicated – in order to standardise results and to ensure consistency of marking, mark schemes are now very detailed. Assessment objectives (AOs) apply to different tasks within each paper, so for instance, students must address evaluation of sources in one particular question but not another – if they evaluate sources elsewhere, no marks can be given. There is no examiners’ discretion to reward a talented but unconventional response. In a recent case, an Oxford tutor reported on a piece of history work submitted by a candidate. It was some AS-level written work – large parts of excellent work had been crossed out by the student’s A level teacher with the comment ‘not required for AS level’.

Defenders of the modular A level argue that it has made the subject even more popular and accessible. There are teachers working in adventurous ways at A level (Laffin, 2009, pp.79-84) but Laffin spends her whole time teaching A level History in a large sixth form college. For most history teachers in schools, A level classes form only a small part of their work and the demands of assessment (both coursework and modular exam timetables) have added to those of GCSE and Key Stage 3.



Trends in Examinations
Most of the issues affecting history exams today are the result of structural and administrative changes affecting all school subjects. Briefly, these are:-

  1. Widening of the examination system in the 1960s to include more of the ability range: teacher control of part of the assessment;

  2. Development of new forms of assessment to cater for the wider ability range and new subjects;

  3. The raising of the school leaving age in 1973;

  4. The development of a common examination at 16+;

  5. Regulation of examinations;

  6. Reorganisation of examination boards – now awarding bodies;

  7. Standardisation of the curriculum at 11-16 and the use of criterion-referencing;

  8. The impact of vocational education on schools;

  9. The impact of GCSE on A levels;

  10. The use of league tables of results.


Widening of the Examination System
GCE O level was well-established by the 1960s. It was administered by eight examination boards, all but one connected to one or more universities. O level catered for the top 20% of the ability range. Many schools did have regional affiliations to particular boards – for instance, the Joint Matriculation Board drew most of its participant schools from the North of England, but there was no constraint on schools to choose a particular board. Under the Schools Council, curriculum innovations were piloted with the examinations boards. The Nuffield Foundation provided financial support for a number of subject areas to pilot new curricula up to O level. Usually one of the GCE boards would provide the examination in the pilot phase with others offering it later (this was the case for SCHP).
The development of CSE was a response to grassroots demand for qualifications for children in secondary modern schools. Some local authorities had developed their own systems of accreditation for school leavers from secondary modern schools (see example from Lincolnshire in the files). The examinations were based on the O level with a large amount of factual recall required, but generally students were offered more ‘structure’ in the paper, with shorter questions and guidance on what was required. The certificates were accepted by local employers.
CSE was introduced in 1963 for first examination in 1965. It was supposed to cater for the next 40% of children below O level standard and was graded 1-5, with grade 1 equivalent to a pass at O level. A completely separate system of 14 examination boards in England and Wales was created to administer it. These boards were set up as regional groups of local authorities, e.g. Southern Regional Examinations Board (SREB) which ran the Schools Council History Project exam. Regionalism was important to support the coursework system which was based on the use of Regional Moderators who worked with groups of teachers to mark samples of coursework.
Three ‘modes’ or types of exam were available to schools. The most popular was Mode 1, a board-devised common exam externally marked, though coursework was included in many syllabuses. This was a major novelty, involving teachers in marking work for an external examination. Very little training was provided (Tattersall, 2008, p.12). Mode 2 was a provision for a school-designed syllabus which the board then examined – this related to a tiny minority of schools. Mode 3 allowed the teacher to devise the syllabus, get it approved and then examine and mark it him/herself, with moderation by the board. In history, this was a popular option for enthusiastic and innovative teachers (see Andy Reid interview no. 23). Some of the CSE boards made considerable use of Mode 3 – the most extreme example being The West Yorkshire and Lindsey Regional Examination Board (TWYLREB), which was reputed to have administered more than 10,000 Mode 3 syllabuses in its area (Tattersall, 2008, p.10).
Teacher control was an important feature of CSE administration in all modes – even for Mode 1, the subject committee determining the content of syllabuses was dominated by teachers rather than university academics (Tattersall, 2008, p.5).
New forms of assessment
A groundbreaking move came in 1962 when JMB introduced an English Language O level with 100% internal assessment by teachers – essentially a coursework-based qualification (it was made available across England from 1977). It was a genuine attempt to enable assessment to follow learning rather than vice versa. The scheme was demanding on teachers, who had to be trained and to attend moderation meetings on top of teaching and marking to the required standards. It continued through various permutations into joint CSE/O level and then GCSE format, attracting 200,000 entries at its peak in 1993 (Spencer, 2003).
In addition to coursework, examination boards experimented with other types of examination, such as objective tests. Henry MacIntosh carried out work for AEB between 1967 and 1969 to trial multiple-choice tests in history. There were many challenges, not least creating questions which would be sufficiently discriminating. MacIntosh tested a series of 90 history questions in the UK and Rhodesia. The main obstacle to the further development of multiple-choice questions was the need for a sufficiently large question bank to ensure students could not prepare for them by using past papers (MacIntosh, 1969, p.32 – see also the example of an objective test in Economic and Social History, 1971).
The Raising of the School Leaving Age
In 1973, all pupils had to remain in school until the end of the fifth year (age 16) leading to a much higher entry for CSE. The impact can be seen straight away in entries for CSE history – between 1971 and 1974, entries almost doubled from 79,150 to 132,772. The increase in O level for the same years was from 128,190 to 138,581. In 1977, CSE numbers overtook those for O level. This placed a burden on the CSE and O level examination boards to provide for a wide range of ability and to provide alternatives to the ‘sudden death’ formal examination. Mode 3 examinations were introduced in O level but only on a limited scale and with more external control, reflecting the different assumptions about the credentials of the examination (Spencer, 2003, p.115 and Wyatt, 1973, pp.7-8).
Tattersall makes a direct link between the raising of the school leaving age (ROSLA) and an increase in Mode 3 syllabuses at CSE from 1974 onwards (Tattersall, 2007, p.64). Mode 3 entries for history rose from 18,208 in 1973 (38%) to 30,385 in 1975 (44%).
The Development of a Common Examination System at 16+

In 1966, a Joint GCE/CSE Committee produced a report for the Schools Council recommending a programme of research into a common system of examinations at 16+ to be undertaken by both types of boards, assisted by the National Foundation for Educational Research (NFER) (The Schools Council, 1966). In 1971, the Schools Council issued Bulletin 23 recommending combining GCE O levels and CSEs into one examination (Schools Council, 1971). It glossed over the issue of ability range, preferring to retain the percentiles 40-100 as the range to be covered by the new exam, even though the imminent ROSLA meant something would be needed for the bottom 40%. It came down firmly in favour of a system ‘largely in the hands of teachers and therefore [to] be controlled by teachers’ (Schools Council, 1971, p.30). Examination boards were invited to run pilot examinations in 1974. Most of the pilots were not continued, the exception being the JMB which continued to work with the Northern Examining Association (NEA) comprising the four northern CSE boards to offer joint exam syllabuses and papers.


There was some resistance from the GCE boards. One of the most difficult issues to overcome was the contrast between the role of teachers in assessing CSE and the absence of any teacher involvement in assessing O level. CSE boards were organised in a devolved fashion, with teachers involved at many levels in a consultative and formative role in relation to CSE syllabuses. This was not the case with GCE boards: although they all had subject committees which included teachers, their work was far removed from the school level, and generally little information was given to schools about how the process of examining worked (see leaflets produced by AEB in the 1970s and 80s to explain the process). The way Subject Committees operated tended to lead to conservatism in the development of syllabuses and examinations – it was estimated that five years was needed to go from an idea for reform to a first examination of a new syllabus (Wyatt, 1973, p.4). AEB, in its response to Bulletin 23, expressed doubts about the desire of the majority of teachers to become involved in examining; nor did it think that a common system of examining should be assumed to be feasible (AEB, Dec. 1972).
Another issue significant for the longer term was that of grading. O level grading differed by board until 1975, when a standardised grading system A-E (A-C were passes) was introduced. Before that, each board had its own system (JMB used grades 1-9, with 1-6 being passes). CSE had a consistent grading system, despite being examined regionally (there was some cross-examining to try to ensure comparability of standards), and was awarded grades 1-5. It had never been established whether grades 2-5 were equivalent to the grades below an O level pass – this caused problems for schools when deciding which exam to enter pupils for, and affected the status of CSE as an examination (Murphy, 2003, p.184). The question of a grading system for a new examination exercised the Schools Council. Bulletin 23 recommended between five and nine grades – in the event, seven were introduced in GCSE (grades A-G). In their review of the responses to Bulletin 23, the Schools Council discussed grading schemes (Schools Council, 1973, p.11).

The use of joint O level and CSE exams in history
The use of a joint syllabus was pioneered by the Schools Council History 13-16 Project. From its inception in 1972, SCHP was offered as a common syllabus for both CSE and O level, but the examinations were conducted by two separate boards, Southern Regional Examinations Board (SREB) and Southern Universities Joint Board (SUJB), though the latter were reluctant (Sylvester interview extract).
The Waddell Report 1978

Waddell endorsed the Schools Council’s proposals and thought the new common system could be introduced by 1983, with the first examination in 1985 – one year longer than the Schools Council’s own estimate. The Schools Council would be in overall charge of the new system, with time allowed for the O level and CSE examination boards to come together to form new joint boards for the common system (Waddell, Part 1, paras 9-13). The comments on history as a discrete subject in Chapter 5 show that some of the progressive changes supported by the Schools Council were adopted by Waddell, though balanced by a continuation of traditional approaches. The introduction to the History Section of the Education Study Group makes this clear:

History examinations should assess the capacity to abstract information from primary and secondary sources, to analyse and synthesise information relevant to an argument, and to communicate the conclusions reached whether in writing or in speech. Reliance must be placed on tried techniques which include traditional essay-type questions, objective tests, oral examinations, projects and course work. (Waddell, Part II, 1978, para.105)

The History 13-16 syllabus was commended as good practice (ibid. para.109). It provided an example of the way in which a single syllabus could be examined even under the two parallel systems. SCHP were already planning to introduce a common sources paper for both its O level and CSE candidates (ibid., para.118).


The need for a ‘range of syllabuses’ for history to be offered by each board was recognised as important within the common system, as had traditionally been the case.

In addition, schools could take syllabuses offered by a board outside their geographical region, as had been the case for O level. The Education Study Group noted a warning that the joint exams piloted by the Schools Council had proved less reliable for discriminating between those at the extremes of the ability range. This was partly due to the format of the examination (ibid. paras 113-4). The Group was fairly conservative over this – whilst agreeing that examinations focused on factual recall limited the most able, they did not give support to the routine use of project work, nor did they endorse the use of multiple choice tests, oral exams or teacher assessment, all seen as less discriminating between candidates. The Group recommended that if implemented, a common core paper with alternatives would be needed to address the range of ability.


The introduction of GCSE
In approving the introduction of GCSE, Sir Keith Joseph insisted on close involvement in the details of the new qualification. In particular, three features were specified:-

  • that the new qualifications should be based on general and subject-specific criteria developed by the SEC with the boards - and approved by himself;

  • that the O level boards in each examining group should take responsibility for carrying forward the O level A to C grade standards into the new scale, while the CSE board in each group should do the same for grades D to G, which were to be based on CSE grades 2 to 5 respectively;

  • that most subjects should be examined through differentiated (tiered) papers focused on different parts of the grade scale, to ensure that each grade reflected 'positive achievement' on appropriate tasks, rather than degrees of failure. (from ‘The Story of the General Certificate of Secondary Education (GCSE)’.)

An important corollary of the change was the merger of O level and CSE boards – resulting in a reduction to 4 boards (now 3 – OCR, AQA and Edexcel) in England and one each for Wales and N. Ireland.
The History GCSE exam typically consisted of two timed exam papers (as had the CSE and O level) plus coursework marked by the teacher and moderated by an external examiner by sampling to check the standard of marking. Initially most syllabuses were simply a carry-over from the O level and CSEs which the boards concerned had offered before.
There were a number of important centralising issues associated with GCSE due to the general and subject-specific criteria. The National Criteria for History (SEC, 1985) listed assessment objectives which prioritised evidence-based skills (section 3), including ‘empathy’ (3.3 ‘to show an ability to look at events and issues from the perspective of people in the past’). On content, the National Criteria were less prescriptive, though 4.1 includes a requirement that each examining group offer at least one syllabus covering ‘the intellectual, cultural, technological and political growth of the United Kingdom’. Examiners were no longer able simply to specify start and end dates and call it a syllabus. Assessment had to be a mix of document questions, short answer questions testing knowledge and essay-style questions demanding extended prose. The use of coursework and its ‘value’ in the exam was also specified by the National Criteria (initially at least 20% of the marks). The subject-based criteria were heavily influenced by SHP approaches to history – references to evidence-based skills abound, e.g. under ‘Aims’, 2.3 states that one of the aims of a History course is ‘to ensure that candidates’ knowledge is rooted in an understanding of the nature and the use of historical evidence’. (For a detailed commentary on the National Criteria, see Booth, Culpin & MacIntosh, 1987.)
In addition, the subject-specific National Criteria included broad ‘grade descriptions’ for two of the grades (C and F) as exemplars. These were expected to be swiftly replaced by detailed grade criteria for all grades, against which performance could be assessed (DES, March 1985, paras 18, 28-32). The difficulties in achieving this soon became apparent (Tate, 1987, p.78). QCA’s 2009 comment on the effort to produce detailed grading criteria is as follows:-

‘a great deal of energy was expended before it was accepted that assessment and awarding systems designed to deliver “absolute standards” were in practice unmanageable and incapable of delivering the certainties that some had thought possible.’ (QCA, 2009)


Nevertheless, the marking of both GCSE and A level has been heavily influenced by the prescription of grading criteria as a way of ensuring consistency in marking and the awarding of grades. The GCSE was awarded at grades A-G, with A-C aligned to the O level grades A-C and grades D-G to the CSE grades – below G is ‘ungraded’. The A* was added in 1994. Sir Keith Joseph wanted 90% of children to reach grade F (which was ‘average’ at the time of introduction) – this was achieved in 2004/5 (when 90.2% of GCSE candidates achieved 5 GCSEs A*-G).
The trend for Education Secretaries to set targets for GCSE passes has continued. David Blunkett set the goal of 50% of 16-year olds gaining 5 GCSEs at grade C and above. This was achieved by 2003/4, but with the use of vocational qualifications as substitutes for traditional academic GCSEs. More recently, the ‘bar’ has been raised again with the target that 50% of pupils achieve 5 GCSEs A*-C including English and maths.
Tiered papers have never been used in history GCSE, though they are common in other subjects, especially maths (which originally had 3 tiers). Tiered papers allow for some overlap but cap the grade which can be achieved by that tier (e.g. A*-D, C-G). The case against tiered papers in history rests on the argument that one cannot set questions in history to which a candidate cannot respond at the highest level (see Sean Lang interview notes). None the less, the problem of language and access for the weakest candidates is one which is recognised. The challenges of setting questions using documents as well as questions accessible for all abilities were explored by a group of chief examiners in 1989 (SEAC, 1989).
Regulation of examinations
Central control of examinations was gradually extended to cover all academic qualifications at 16+ and 18+. Since 1917, the Secondary Schools Examination Council (SSEC) had overseen the school certificate system, but did not control the boards or their examinations. From 1964, the Schools Council, funded in equal parts by the Government and LEAs, took over the SSEC’s responsibility for exams in addition to its role in the promotion of curriculum development (Tattersall, 2003, p.14 – also see p.97 for information on the majority role of teachers on the Schools Council). It ‘loosely co-ordinated’ the examination system from 1964-81 but it had few powers over CSE which was really under the control of local education authorities, groups of which managed the regional boards. It had even fewer powers over GCE which was in the hands of the university-run boards (Tattersall, 2008, p.14).
The Schools Council was replaced by two bodies, the Secondary Examinations Council (SEC) which oversaw qualifications and the Schools Curriculum Development Committee (SCDC). These mutated into the Secondary Examinations and Assessment Council (SEAC) and the National Curriculum Council (NCC) as a result of the Education Reform Act (ERA) of 1988, their brief being to oversee the implementation of the National Curriculum and its attendant testing system. Powers of inspection were also strengthened, with the replacement of HMI (Her Majesty’s Inspectorate) by Ofsted (Office for Standards in Education) in 1992. Other outcomes of the ERA included the publication of school performance tables from 1993 onwards – league tables as they are known today. The ERA marks an important break with the past, as SEAC was given considerable central power over GCSE syllabuses which became much more specific about the ‘learning outcomes’ to be tested. National criteria were introduced for all subjects, including of course all the National Curriculum subjects. GCSE became the testing instrument for the National Curriculum Key Stage 4 in 1994. Forms of assessment were tightly controlled at GCSE and A level, including the amount of coursework (internal assessment) allowed. A Code of Practice was introduced to govern the administration of examinations by the boards and their awarding procedures (Tattersall, 2003, pp.17-19).
In 1993, the qualifications and curriculum roles were again combined under the School Curriculum and Assessment Authority (SCAA) until 1997, when it was replaced by the Qualifications and Curriculum Authority (QCA). In 2008, the functions were again divided into two, with Ofqual to deal with qualifications and QCDA, the Qualifications and Curriculum Development Agency. This latter quango was abolished by the Coalition Government in 2010 leaving Ofqual to regulate examinations.
Reorganisation of Examination Boards – now Awarding Bodies
From 1953, there were nine GCE examination boards, with the inclusion of the Associated Examining Board (AEB) which had been formed by City and Guilds to offer mainly technical and vocational subjects. The number was reduced by the demise of the Durham board in 1964 and the merger of the Southern Universities Joint Board (SUJB) with Cambridge Board in 1990.
The merger of boards to facilitate the introduction of GCSE was a major shift away from regional links between exam boards and schools. Schools were much freer to ‘shop around’ and use different boards for different subjects, which had been far less common for O level and not allowed for CSE. The new merged boards and the implementation of National Criteria for examinations curtailed much of the freedom the university-led exam boards had had to initiate new syllabuses and develop the school curriculum at 16+. Their role in GCE A level was likewise gradually constrained by national criteria (the ‘Common Core’) and by the Dearing Reforms (Curriculum 2000). Most universities withdrew from their involvement in examination boards – by the mid-1990s, only Cambridge and the northern universities retained control over their respective boards. JMB’s formal ties with its university founders loosened when it became part of NEAB and disappeared in the merger with AEB which formed AQA in 2000. Oxford sold its interests to UCLES and the AEB. London took a back seat when its examining arm merged with BTEC to form Edexcel, and lost all links when Pearson (the publishers) bought the business in 2002. Two of the current boards (OCR and Edexcel) offer both vocational and academic qualifications (Tattersall, 2007, pp.73-8).
Changes in setting, marking and awarding in examinations
‘The challenge facing assessment for GCSE in history is how to shift an assessment model involving both setting and marking which at present discriminates negatively between some 60 per cent of the school population to one which differentiates positively between some 80 per cent.’ (Booth, Culpin & MacIntosh, 1987, p.54)
Although different boards were supposed to confer over the relative difficulty of their CSE and O level papers, there was no overall standardising procedure. Before the 1980s, however, papers were relatively simple in the forms of assessment required, and questions were often closely modelled on a standard formula, so mark schemes were also simple. What was tested was also very limited in range – a mixture of factual recall and prose writing skills. The ability range of GCSE made the setting of questions tricky, given the need to offer both the most able and the least able the opportunity to show what they could do. In addition, history exams had always included choice for the pupil, on the assumption that all questions were equally hard (or perhaps also the assumption that choosing the ‘right’ questions was part of the test). Great care had to be taken to set questions at GCSE that were both comparable and allowed ‘differentiation’ between the least and most able. The ambitions of those introducing GCSE history to test a more complex range of historical thinking skills placed even greater demands on examiners to produce questions of comparable difficulty and discrimination. The use of ‘stepped’ questions became common – these had already been used in CSE exams to enable students to tackle the easier part before the more complex, and to allow them to build up confidence as they progressed through the question (Booth, Culpin & MacIntosh, 1987, p.55).
The setting of grade boundaries in O level and CSE was ‘norm-referenced’: the proportion of students awarded a particular grade remained relatively even over time, whatever the ability of the particular cohort, on the assumption that in a large examination cohort the range of ability is always much the same. This form of grading meant that boards could be less specific about what was required for each grade, and it allowed markers more discretion in their judgements of pupils’ work. It also meant that pass rates remained more or less static for many years – close to 70% at A level and 57–58% at O level from 1966 to 1985.
Criterion-referenced marking is more objective, and fairer in that the levels of achievement required for particular grades are set before marking begins. It allows as many students as reach the standard to be awarded the grade, so pass rates and grades achieved can rise (or fall) year on year to reflect the ability of the particular cohort (a necessary effect if the government is to show improving standards of attainment!). The demands of criterion-referencing are, however, more rigid: the different levels of achievement must be worked out in advance of seeing the pupils’ answers, and more completely than for an O level mark scheme. Comparability is harder to achieve where boards have different schemes of assessment, and the risk of sharp swings in the grades awarded is greater (Schools Council, 1973; Booth, Culpin and MacIntosh, 1987, pp.53-4 contain a strong critique of norm-referencing).
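The contrast between the two approaches can be illustrated in code. The sketch below (in Python) uses the SSEC-style fixed proportions discussed later in this account for the norm-referenced case; the raw-mark boundaries in the criterion-referenced function are purely hypothetical examples, not any board’s actual figures.

```python
# Illustrative contrast between norm- and criterion-referenced grading.
# Proportions follow the SSEC recommendations (10/15/10/15/20 per cent for
# grades A-E); the raw-mark boundaries below are hypothetical.

def norm_referenced(marks, proportions=(("A", 0.10), ("B", 0.15), ("C", 0.10),
                                        ("D", 0.15), ("E", 0.20))):
    """Award grades by rank order: a fixed share of the cohort receives each
    grade, whatever raw marks this year's candidates actually achieved."""
    ranked = sorted(marks, reverse=True)
    graded, cursor = [], 0
    for grade, share in proportions:
        count = round(share * len(ranked))
        graded += [(mark, grade) for mark in ranked[cursor:cursor + count]]
        cursor += count
    graded += [(mark, "U") for mark in ranked[cursor:]]  # remainder unclassified
    return graded

def criterion_referenced(mark, boundaries=(("A", 80), ("B", 70), ("C", 60),
                                           ("D", 50), ("E", 40))):
    """Award grades against pre-set standards: every candidate who reaches
    the boundary gets the grade, so pass rates can move year on year."""
    for grade, minimum in boundaries:
        if mark >= minimum:
            return grade
    return "U"

cohort = [92, 81, 74, 74, 66, 59, 58, 43, 40, 35]  # raw marks out of 100
print(norm_referenced(cohort))
print([(m, criterion_referenced(m)) for m in cohort])
```

With a large cohort, the norm-referenced scheme reproduces the stable ~70% pass rate described in this section by construction; the criterion-referenced scheme lets the pass rate follow the cohort’s actual marks.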
During the 1970s, in response to the broader range of ability which needed to be recognised and accredited, profiling pupils’ achievements found favour with those who felt that assessment drove the curriculum too much: producing a summary profile (or description) of a pupil’s achievement meant that a variety of work could be assessed by the teacher over a longer period of time (Schools Council, 1973, p.11). The problem, however, was the usual one of comparability of standards. A later attempt to introduce profiling into the education system, the National Records of Achievement of the 1980s and 1990s, had no effect on the structure of examinations or their value to schools and pupils (Spencer, 2003, pp.127-8).
Mark schemes based on the work of the cohort, rather than on a pre-determined model answer written by an examiner, were developed by the SCHP from 1978 onwards (extract from Harrison interview, 2009). The SCHP senior examiners laid out for markers a series of descriptions of ‘levels of response’ to the specific question (Booth, Culpin & MacIntosh, 1987, pp.70-1). This approach fed directly into the ‘grade descriptions’ included in the National Criteria for GCSE. The introduction of GCSE resulted in standardisation of assessment across all boards, implemented via the regulatory bodies. This standardisation and the use of grading criteria are reconciled by statistical means – the UMS, or Uniform Mark Scheme (see explanation in BBC, September 2002). The same approach was applied to A level as part of the Curriculum 2000 reforms (see below).
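In outline, the UMS reconciliation works by mapping each session’s raw grade boundaries (which shift with the difficulty of the paper) onto fixed uniform-mark boundaries, interpolating linearly in between. The following is a minimal sketch of that idea; all the boundary figures are hypothetical, not taken from any actual paper.

```python
def raw_to_ums(raw, anchors):
    """Map a raw mark to a uniform mark by linear interpolation between
    matching boundary points. `anchors` is an ascending list of
    (raw_boundary, ums_boundary) pairs from zero up to the paper maximum."""
    for (r0, u0), (r1, u1) in zip(anchors, anchors[1:]):
        if raw <= r1:
            return u0 + (raw - r0) * (u1 - u0) / (r1 - r0)
    return anchors[-1][1]  # at or above the top anchor

# Hypothetical session: raw E-A boundaries of 38/46/54/62/70 (out of 90)
# map onto fixed uniform boundaries of 40/50/60/70/80 (out of 100).
anchors = [(0, 0), (38, 40), (46, 50), (54, 60), (62, 70), (70, 80), (90, 100)]
print(raw_to_ums(58, anchors))  # a raw 58 lands between the C and B boundaries
```

Because the uniform boundaries are fixed, a harder paper (lower raw boundaries) and an easier one (higher raw boundaries) yield comparable uniform marks, which is what allows results from different sessions and boards to be aggregated.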

Changes to the GCSE exam since 1990
The main trend in history exams has been a contraction of syllabus choice. No major changes to assessment were made before 2009, although the style of questions changed (see sample papers). Short courses, designed to encourage students to take more subjects, never gained popularity in schools and have largely fallen into disuse. Another significant trend has been the reduction in lower-ability candidates taking dedicated entry level courses (below GCSE grades).
In 2009, new specifications were introduced for GCSE. These combine an outline course, a depth study, a source enquiry and a ‘controlled assessment’ (coursework completed in the classroom).
A new course, the GCSE Pilot, is on offer at 100 schools through OCR. Its approach is completely different from the normal GCSE in History: it includes a heritage/multimedia project, with 75% of the qualification assessed by the teacher, and an examined unit on medieval history (OCR specification, 2006).
Changes to A level history syllabuses and exams
A level history exams traditionally consisted of two three-hour papers, during each of which candidates wrote four 45-minute essays. There was some spill-over from curriculum innovation in O level history during the 1970s with the introduction of a pilot A level history syllabus, the AEB 673, first offered in 1977. It was expected that students who had experienced Mode 3-style coursework assessment at O level would want the same type of assessment at A level. Hence the 673 included both a ‘personal study’ – an extended piece of writing based on independent research by the student on a topic of their own choice – and a ‘methodology’ paper, which included questions on unseen sources. In addition, students also sat a conventional essay-based Paper 1 from the range already on offer from AEB. The personal study had already been pioneered in General Studies A level and initially included an oral exam by an external moderator (AEB in TH, Jan. 1976, pp.202-3). AEB 673 continued as a ‘pilot’ into the 1990s – it had a faithful cohort of schools, but was never in danger of sweeping away the conventional A level.
Other curriculum innovations included the Wessex Modular Scheme, which AEB ran in the West Country in the 1980s. Wessex was an early modular scheme allowing students to combine units from different A level subjects; approximately 18 schools were involved in Wessex A level history. Sean Lang acted as a moderator for Wessex history, but was concerned at the time about variations in the standard of coursework produced (Lang interview notes, 2010). Another A level history innovation was the ETHOS scheme, a mixed traditional/coursework syllabus also with AEB. ETHOS was the brainchild of John Fines and Jon Nichol, funded by Nuffield and launched in 1989 (Fines & Nichol, 1994); about 12 schools were involved. Both Wessex and ETHOS ended in 1994 following the introduction of the Subject Common Core for A level History (Fisher, June 1995).
Following GCSE, there was a temptation amongst innovators to offer something similar at A level. It is somewhat surprising that the SHP had not already sought to introduce an A level which practised its approach to history teaching. In 1995, a group of former SHP curriculum innovators produced the Cambridge History Project (CHP), arranged around the theme ‘People, Power & Politics’. The course consisted of an overview of the 17th century and a depth study on the English Civil War, with a contrasting study on the Fronde. Sean Lang estimated that around 50 schools took up the CHP; Denis Shemilt (one of its creators) commented on the demise of the syllabus in his interview (Shemilt, Interview 11, 2009 – extract). Part of the reason for its demise was financial: administering and marking the CHP was expensive for the exam board, the same consideration that had affected AEB’s relationship with ETHOS. Ultimately, however, CHP failed to capture the imagination of teachers in the way SHP had done. This was partly because most A level syllabuses had adopted document source questions by the early 1980s, but it may also have been due to the value teachers and students placed on the traditional A level history course. Eric Evans gives some insight into this in his interview when he compares the ‘fraudulent’ O level with the genuine analytical skills required at A level.
General changes in Post-16 exams
The post-16 phase is the location of many failed initiatives, many of them conceived for ‘political’ purposes. In the 1970s, the challenge was to find an appropriate qualification for students who wanted to stay on in the sixth form but were not going to take A levels. Resitting O levels often led to a repeat of failure or low grades. Intermediate-level courses lasting one year, such as the Advanced Ordinary and ‘N’ level exams, were devised to fill the gap, but did not gain popularity as they were not a stepping stone to A level. The Certificate of Extended Education, a more holistic approach, also failed to gain popularity. Another group needing certification were young people who wanted a vocational qualification but were not sufficiently qualified to get onto a post-16 apprenticeship or technical course. The Certificate in Pre-Vocational Education proved popular; from it can be traced the later development of General National Vocational Qualifications (GNVQs), subsequently transformed into Applied A levels in an attempt to improve the status of advanced vocational courses.
Most A level syllabuses and exams changed little before the 1980s and pass rates remained very stable (see Educational Statistics). Grades A-E had been introduced in 1963, with recommendations from the SSEC for the proportion of candidates in each grade (10% at A, 15% at B, 10% at C, 15% at D and 20% at E, making the 70% pass rate). These guidelines remained in place until 1987 (Tattersall, 2007, p.76). There was a notable rise in pass rates after 1985, which could be explained by the gradual abandonment of norm-referencing and the increased use of coursework at A level (see correspondence with Eric Evans and Arthur Chapman, April/May 2010).
Following the introduction of the Common Core, further standardisation of A levels by SCAA took place in 1996 in the wake of the Dearing Review. The resulting ‘Curriculum 2000’ reforms changed A level exams fundamentally by introducing a modular or ‘unitised’ structure.
Problems in 2002 with Curriculum 2000

The implementation of a completely modularised system of A levels from 2000 was always going to be a challenge. As shown above, earlier modular schemes had suffered from a lack of scrutiny. The new Advanced Subsidiary (AS) qualification in History was of a lower level of difficulty and was usually taken during and/or at the end of the first year (though all units could be taken at the end of two years – conferring something of an advantage on candidates who did so). Units were offered in January and June exam sessions, six in all: three for AS and three for A2, including a coursework unit.
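The six-unit structure, with AS counting for half of the final grade, can be sketched as a simple aggregation of uniform marks. The unit maximum (100 UMS) and the overall grade thresholds used here (80% of total UMS for an A, down to 40% for an E) are assumptions for illustration only.

```python
# Sketch of Curriculum 2000 aggregation: three AS units plus three A2 units,
# each carrying uniform marks, with AS and A2 each worth half the total.
# Unit maximum and grade thresholds are illustrative assumptions.

def final_grade(as_units, a2_units, unit_max=100):
    total = sum(as_units) + sum(a2_units)
    pct = 100 * total / (unit_max * (len(as_units) + len(a2_units)))
    for grade, threshold in (("A", 80), ("B", 70), ("C", 60), ("D", 50), ("E", 40)):
        if pct >= threshold:
            return grade
    return "U"

# Good AS marks banked in the first year cushion a weaker A2 performance -
# the cumulative effect that threatened a sharp rise in grades when A2 units
# were first examined.
print(final_grade([85, 80, 82], [55, 50, 58]))
```

Because AS marks were already in the bank, a candidate’s final grade was partly settled before the A2 papers were sat, which is why strong 2001 AS results fed directly into the 2002 outcomes.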


Problems emerged with the marking of the 2002 papers, the first year in which both AS and A2 units were examined. The main issue was the cumulative effect of the AS (at a lower level) and the A2. Since the grades achieved by students at AS in 2001 were generally good, and were worth 50% of the final grade, they automatically boosted students’ overall performance, threatening a big rise in the numbers passing and achieving high grades. Allegations emerged that exam boards had adjusted grade boundaries to hold results down, and that some students had missed their university places as a result (BBC News extract, September 2002). The role of the Qualifications and Curriculum Authority was brought into question, leading to the resignation of its Chief Executive, David Hargreaves. The fracas also contributed to the departure of Estelle Morris as Secretary of State for Education. Hargreaves’ replacement, Ken Boston, insisted on computerisation as the way forward for the boards’ marking of up to 24 million scripts (Guardian, 11.06.03). A row over delays in the reporting of National Curriculum SATs results by a contracted-out company led to Boston’s own resignation in 2009.
Whereas choosing which questions to answer on an A level paper had always been a matter of strategy, decisions about when to enter for which units, and whether to resit them, now became crucial. At first unlimited resits were allowed – depending on the student’s purse (most schools and colleges paid for only one resit of any unit). Some teachers favoured entering students for one unit in the first January of the course, to give them a taste of a ‘real exam’. For those who gained the desired grade at this early stage, it reduced the pressure in June when the rest of the AS units were taken; for those with a weaker performance, there was always the June resit anyway.
The chief benefit for students and teachers was the end of unrealistic A level expectations. AS grades gave an indication of eventual grade levels (and of course counted towards the final overall A level grade), which proved helpful to university admissions tutors too. Students who had failed or nearly failed at AS were advised not to continue to A2, or to resit AS only. Thus the pass rate for the full A level climbed inexorably to nearly 100% (99.2% in 2010).
In 2004, the Tomlinson Report recommended a new Advanced Diploma (with similar diplomas at Intermediate and Foundation level for 16+ pupils), similar to a baccalaureate, acting as an ‘umbrella’ for both A levels and vocational qualifications, with a common requirement for maths, English and IT plus a personal project of extended writing and community service. The Tomlinson recommendations came to nothing when Downing Street vetoed any change which might be ‘seen as a dilution of standards’ (Guardian, 24.02.05). The extended project is now available as a ‘half A2 level’ qualification for both vocational and A level students in post-16 education.
In recent years, the focus has switched to the 14-19 age range (see HA Report, 2005), rather than 16-19, with the Nuffield Report (2009) attempting to return to a fundamental question, ‘What counts as an educated 19 year old in this day and age?’

Sources

The Guardian, 11 June 2003, ‘Exam system “unsustainable” warns QCA Chief’.

The Guardian, 24 February 2005, ‘Frustration at a missed chance for reform’.

BBC News Online, Monday 23 September 2002, ‘Q & A: A-level fiasco’.
DES, GCSE A General Introduction (HMSO, 1985)

Nuffield Review of 14-19 Education and Training, England and Wales, Education for All (Routledge, 2009)

QCA, The Story of the General Certificate of Secondary Education (GCSE) formerly at http://www.qca.org.uk/qca_6210.aspx (accessed 20.03.2009)

The Schools Council, Examining at 16+ (HMSO, 1966)

Schools Council Examinations Bulletin 23, A common system of examining at 16+ (Methuen Educational, 1971)

The Schools Council, Review of comments on Examinations Bulletin 23 SC Pamphlet 12 (Schools Council Publications, 1973)

Chief Examiners’ Report 1989, Differentiation in GCSE examinations (SEAC, 1989)

Tomlinson Report, 14-19 Curriculum and Qualifications Reform: Final Report of the Working Group on 14-19 Reform (October 2004).

Sir James Waddell, School Examinations (HMSO, 1978) Parts 1 and II.

The Associated Examining Board, An Appraisal of Examinations Bulletin 23 (AEB, December 1972)

Historical Association, History 14-19 (HA, 2005)
Martin Booth, Chris Culpin & Henry MacIntosh, Teaching GCSE History (Hodder & Stoughton, 1987)

John Fines & Jon Nichol, ETHOS: Doing History 16-19: A Case Study in Curriculum Innovation and Change (Historical Association, 1994)

Trevor Fisher, ‘The New Subject Core for A level History’, in Teaching History 80 (June 1995)

Henry G. MacIntosh, ‘The Construction and Analysis of an Objective Test in Ordinary Level History’ (AEB, 1969).

Michael Maddison, presentation at HA Conference ‘History in Schools – Present and Future’ (28 February, 2009).

Roger Murphy, ‘A Review of the Changing Views of Public Examinations’, in K.Tattersall, AQA: Setting the Standard: A century of public examining by AQA and its parent boards (AQA, 2003).

Alan Spencer, ‘Changes in the Nature of Assessment’, in K. Tattersall, AQA: Setting the Standard: A century of public examining by AQA and its parent boards (AQA, 2003).

Nick Tate, GCSE Coursework: History. A Teachers’ Guide to Organisation and Assessment (Macmillan Education, 1987)

Kathleen Tattersall, ‘The Relationship of Examination Boards with Schools and Colleges: a historical perspective’, speech given in Cambridge, June 2008.

Kathleen Tattersall, ‘Ringing the changes: educational and assessment policies, 1900 to the present’, in K.Tattersall, AQA: Setting the Standard: A century of public examining by AQA and its parent boards (AQA, 2003).

Kathleen Tattersall, ‘A Brief History of Policies, Practices and Issues relating to Comparability’, in P. Newton et al. (eds), Techniques for Monitoring the Comparability of Examination Standards (QCA, 2007).

Anastasia de Waal, ‘Straight A’s? A level teachers’ views on today’s A-levels’ (Civitas Paper, August 2009)

T.S. Wyatt, ‘The GCE Examining Boards and Curriculum Development’, paper presented to the Schools Council on behalf of the GCE Examination Boards, April 1973.
Notes of an interview with Sean Lang, 9 November 2010.

Notes of an interview with Ian Colwill, 17 June 2010.


Relevant Interviews in the HiE collection

Denis Shemilt (extract in file)

David Sylvester (extract in file)

Jon Nichol

Eric Evans

Andy Reid



Scott Harrison (extract in file)



N.Sheldon


