function as a tool to aid individual public sector organisations in their efforts to promote innovation; and
promote awareness of public sector innovation both in general and among individual public organisations.
Damvad’s conceptual framework for measuring public sector innovation consisted of five main elements: inputs to innovation; innovation processes within the organisation; outputs of the innovation process; general outcomes of innovation; and external factors or framework conditions that affect innovation in public sector organisations. The framework is shown below:
Organisational performance (both productivity and quality measures)
Other intangible effects (e.g. increased trust and legitimacy)
Damvad also proposed a set of seven diagnostic indicators to be constructed from the data gathered for the surveyed indicators and, potentially, other data sources. These indicators would be used as a diagnostic tool to assess the innovation performance of public agencies. The diagnostic indicators were:
The diagnostic indicators provide additional analysis of organisations’ own responses to the survey. These results can then be benchmarked against other groups that have been specified for comparison. There are a number of possibilities here, such as:
Comparison with averages for comparable groups
Comparison with top performers for comparable groups
Comparison with results for other groups (that are perhaps less comparable, but where benchmarking may still be useful).
In all cases, this comparison can also include an analysis of strengths and weaknesses of an organisation within selected areas.
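The comparison logic described above can be sketched in code. The following is a minimal illustration, assuming simple numeric indicator scores; the indicator names, the scores, and the quantile-based "top performer" threshold are hypothetical placeholders, not part of Damvad's proposal:

```python
# Illustrative sketch of benchmarking one organisation's diagnostic indicator
# scores against a comparison group: group averages, top performers, and a
# simple strengths/weaknesses assessment. All names and values are invented.

def benchmark(org_scores, group_scores, top_quantile=0.75):
    """Compare an organisation's scores with group means and top performers."""
    report = {}
    for indicator, score in org_scores.items():
        values = sorted(group_scores[indicator])
        mean = sum(values) / len(values)
        # "Top performer" threshold: the score at the chosen quantile
        # of the comparison group (a simplifying assumption).
        top = values[int(top_quantile * (len(values) - 1))]
        report[indicator] = {
            "score": score,
            "group_mean": round(mean, 2),
            "top_threshold": top,
            "assessment": ("strength" if score >= top
                           else "above average" if score >= mean
                           else "weakness"),
        }
    return report

# Hypothetical example: one organisation benchmarked against four peers.
org = {"innovation_inputs": 7, "collaboration": 3}
group = {"innovation_inputs": [4, 5, 6, 8], "collaboration": [4, 5, 6, 8]}
result = benchmark(org, group)
```

In practice the choice of comparison group (comparable organisations versus broader groups) matters more than the arithmetic: the same score can read as a strength against one group and a weakness against another.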
Damvad proposed the construction of an interactive tool that can be utilised to analyse and benchmark innovation in individual organisations. Damvad’s approach is similar to the Korean GII but takes a broader view than just organisational change. It includes issues such as procurement strategies, demand side factors, and mechanisms for knowledge sourcing, learning and implementation.
Damvad proposed a broad classification of organisations, dividing by level of government and by whether the organisation is mainly involved with general administration or the delivery of services to the public:
Central government (including Devolved Administrations)
Frontline services (where organisations that deliver services to the public may either be a part of, or administered by, central or local government).
While stressing the importance of maintaining a common core of indicators across organisations, Damvad also suggested introducing a small set of questions focusing on aspects specific to individual sectors, such as health and education. These could cover the effects of innovations, barriers, specific organisational issues or types of collaboration, and could take the form of either a separate module or a small set of additional or modified questions throughout the survey. These sector-specific indicators would be developed in dialogue with stakeholders from the respective sectors.
The stages of development of the proposed index are shown below:
Stage 1 - Consultations with users. Gain feedback on user needs, promote the overall idea (across departments, countries, etc.), and initiate dialogue and collaborations with key departments. Users should be provided with a fairly detailed proposal.
Stage 2 - Development work - survey, data and online tool. Developing a survey questionnaire, detailed examination of options for using other data and other data-related issues (e.g. access and administration of data), and the development of a plan for the online system.
Stage 3 - Testing. Testing of the questionnaire (potentially via the web-based tool), with feedback sought from users, followed by finalisation of the pilot survey questionnaire. Note that the pilot survey is needed to generate the indicators used for benchmarking in the diagnostic tool, which means that eventual users of the diagnostic tool should answer the same questions as in the pilot study. This is therefore a very important step: final decisions on the survey tool will essentially be made here (though not necessarily on the diagnostic indicators).
Stage 4 - Pilot survey. Work on constructing a population and sample may take some time, though this preliminary work can be initiated in parallel with the development and testing work. The pilot survey needs to be completed before the remaining steps can be undertaken.
Stage 5 - Constructing the diagnostic indicators. The results will need careful analysis, including the development and testing of various methods for constructing the diagnostic indicators. Note that this stage will also require matching in any other data sources to be used for the diagnostic indicators; this matching process can begin as soon as the sample is determined for the pilot study. A proposal concerning the diagnostic indicators should be presented to user groups for feedback. Work should then be finalised on all three strands: the diagnostic indicators, the data set and the web-based diagnostic tool.
Stage 6 - Final preparations. Tabulations of both the survey results and the diagnostic indicators should then be prepared for a publication to be released when the diagnostic tool is launched. The web-based tool should also be tested to confirm that it functions correctly.
Stage 7 - Launch. Launch the diagnostic tool, perhaps through a promotional event, with a publication released at the same time reporting the results of the pilot survey and showing examples of the types of benchmarking analysis that can be conducted with the data.