One challenge posed by the broad mandate, which EVA’s management mentioned during the site visit and in the self-evaluation, is attaining adequate coverage of each educational sector, above all primary and lower-secondary schools.
During 2004 a total of eight action-plan evaluations were undertaken: two in primary and lower-secondary education, one in the upper-secondary sector, three in higher education and two in the field of adult education. Each evaluation covered a selection of schools or educational units. In the same year there were eight ‘knowledge centre’ projects, divided in roughly the same way among the educational sectors. On the basis of its experience, the expert panel considers this to be a small number.30
In the expert panel’s opinion, the evaluations produced therefore appear to be relatively specific measures. Hitherto they seem primarily to have been random samples spotlighting selected themes or subject areas.
This reduces the chances that the evaluation results will be used, perhaps mainly because of the relative paucity of educational institutions involved. In our experience, two of the more lasting effects of evaluations stem from the learning process and the enhancement of internal capacity for quality assurance that participation in an evaluation provides.31 If only a few institutions take part in EVA’s evaluations in the various educational sectors, developmental effects of this kind will be restricted.
The evaluation results also become specific rather than of obvious general interest, which reduces their usefulness. Course providers directly involved may be able to identify with and benefit from the results, but other institutions in the same sector, as well as decision makers, are offered findings that may be seen as less relevant and perhaps also less reliable than if they had been based on a greater mass of accumulated knowledge.32