This section outlines the six exploratory projects commissioned by the UK's National Endowment for Science, Technology and the Arts (NESTA).
Project 1 – Technopolis Group
The paper12 presented a brief literature review of innovation measurement in the public sector and a discussion of innovation in government through two case studies, healthcare and adult social care. It concluded that a public sector innovation index is feasible.
Technopolis gave three options for NESTA to consider with regard to a public sector innovation index: a UK Government R&D Scoreboard, a multi-factor public sector productivity index, and a UK Government innovation scoreboard based on direct measures of public sector innovation. This scoreboard or index could cover inputs, activities and outputs, and possibly a limited number of generic pre-conditions and outcomes. Information would be obtained from a new annual survey, modelled on the private sector Community Innovation Survey (CIS).
Technopolis recommended the third option13 and proposed that NESTA move forward with work to pilot a Public Sector Innovation Scoreboard based on a voluntary survey, using the Community Innovation Survey as a starting point, with a view to its being adopted by DIUS within two or three iterations as a complement to its Annual Innovation Report.
The conceptual framework should:
Be applicable to different departmental / agency missions (e.g. policy, regulation, service delivery, etc.)
Cover a diversity of operational domains (education, health, migration, etc.)
Cover all broad classes of innovation (organisational, process, service)
Capture the extent of innovation outcomes as well as inputs / outputs
The proposed framework is shown below:
Existence of an innovation strategy
Existence of an innovation monitoring and reporting system
Annual expenditure on innovation activity (e.g. R&D expenditure)
Employment of people involved in innovation (e.g. % of scientists and engineers)
Implement the first survey, directed to 10 departments and NDPBs (contractor) (2009)
Prepare departmental submissions (individual departments and NDPBs)
Produce indicative scoreboard and capture lessons (contractor)
Evolve survey design ready for the second iteration of the pilot (DIUS, SG, contractor) (2010)
Phase 2 would consist of combining the survey with the Annual Innovation Report for year three, switching from voluntary to mandatory completion and from part to full coverage (with necessary exceptions).
Project 2 – LSE Public Policy Group
This paper14 discussed the distinctive characteristics of, and the key influences on, innovation in the public sector.
The index should be founded on a broad base of different indicators, since such an index is likely to capture more of the complexity and diversity of public sector innovation than one with only a few elements.
A two-phase process for developing the index is needed. Phase 1 would generate the full set of indicators. This will require a fair amount of original research to see what data can be reliably and regularly assembled bearing on the themes set out above. Phase 2 would involve testing using multivariate regression and other techniques to see which variables are most effective in capturing different dimensions economically.
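The Phase 2 screening described above can be illustrated with a small, hypothetical sketch. The paper does not specify a method beyond "multivariate regression and other techniques"; the example below uses one such technique (principal-component loadings via a singular value decomposition) to rank candidate indicators by how strongly they drive the dominant dimension of variation across organisations. The data and the ranking rule are assumptions for illustration only.

```python
import numpy as np

# Hypothetical Phase 2 sketch: rows = organisations, columns = candidate
# indicator scores. Real data would come from the Phase 1 indicator set.
rng = np.random.default_rng(0)
scores = rng.random((20, 6))  # 20 organisations, 6 candidate indicators

# Centre each indicator, then take each indicator's loading on the first
# principal component of the score matrix.
centred = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
loadings = np.abs(vt[0])  # |loading| of each indicator on PC1

# Indicators ordered from most to least influential on the dominant
# dimension of variation; low-ranked ones are candidates for dropping.
ranked = np.argsort(loadings)[::-1]
print(ranked)
```

In practice the choice between regression-based selection and variance-based reduction would depend on whether an outcome measure is available to regress against.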
The index should be based upon a mix of indicators, some objective or non-reactive and others survey-based.
The index needs to cover the pre-conditions and main factors conditioning innovation, innovation inputs, innovation outputs and (if possible) innovation outcomes.
Government organisations need to be assessed against similar organisations.
The main components and structure of the Index proposed by LSE include ten categories or dimensions intended to capture innovation inputs, enablers and impediments, outputs and outcomes. Three dimensions refer to inputs (R&D Activities; Consultancy and Strategic Alliances; Intangible Assets); four refer to enablers, impediments and adoption elements (ICT Infrastructure; E-Government and Online Services; Origins of Innovations; and Human Resources); two refer to outputs (Institutional Performance; Innovation Outputs), and one refers to the impacts of innovation (Impacts and Scope).
The proposed index in Phase 1 has 54 sub-indicators categorised under ten broader areas of innovation measurement. Each indicator is scored using information collected through a specific innovation survey, publicly available quantitative information, or both.
To enable the differently denominated sub-indices to be added easily, the scores on each component are standardised using the formula:
In = (Xn − Xmin) / (Xmax − Xmin)
where In is the value on a range from 0 to 1 for that indicator; Xn is the score for that particular organisation on this measure, Xmin is the minimum score in the comparator group including X, and Xmax is the maximum score in the comparator group. Organisations scoring closer to 1 are the most innovative in their comparator group, whereas those scoring closer to 0 are the least innovative.
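The standardisation formula can be sketched directly in code. The function and the raw scores below are illustrative assumptions, not part of the LSE proposal; the degenerate case where all organisations score identically (which the formula leaves undefined) is handled by returning zeros.

```python
def standardise(scores):
    """Min-max standardise raw scores to the 0-1 range:
    In = (Xn - Xmin) / (Xmax - Xmin)."""
    xmin, xmax = min(scores), max(scores)
    if xmax == xmin:
        # No spread in the comparator group; nothing to standardise.
        return [0.0 for _ in scores]
    return [(x - xmin) / (xmax - xmin) for x in scores]

# Illustrative raw scores for one indicator across a comparator group.
raw = [12, 30, 18, 24]
print(standardise(raw))  # top scorer maps to 1.0, lowest to 0.0
```

Because each sub-indicator is rescaled to the same 0-1 range, the differently denominated components can then be summed or averaged without any one unit dominating.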
The index has been developed to cover all government organisations. Its broad conception of innovation (running along ten different dimensions) is designed to ensure that no individual government organisations are advantaged or disadvantaged in the scoring.
Ten-dimension structure: Index elements cover the complexity and diversity of government innovation. They also include inputs, enablers, impediments, diffusion elements, outputs and impact (outcomes).
Multiple elements: with 54 proposed components in Phase 1, no single element greatly influences the Index, which enhances robustness. In Phase 2, data reduction will still focus on maintaining robustness.
Mix of objective and survey data: allows for cross-checking of survey responses and greatly enhances reliability. Most objective data will be already-published statistics and measures.
An exploratory Phase 1 is needed: it is not feasible at this stage to definitively establish what measures work well in capturing innovativeness.
Mix of objective and survey data: published data is not always easily available and comparable across years and sectors. Some approximations will be needed.
Some index elements may not be applicable to all government organisations: e.g. filing patents is not relevant to some bodies.
Some index elements may not be available for all organisations: although we are also adept at finding and in-filling with proxies.
Survey returns may be incomplete or sent back in public relations mode: an inherent limit of questionnaire-based, reactive methods. However, good design and cross-checking can minimise this problem.
Index applicable to all government sectors and organisations: allows for comparing how innovative different parts of government are. The approach can be extended to assess main organisations individually if appropriate.
Achieving an index that endures is feasible with the two-phase process, allowing us to focus on data that is both essential and replicable over time and across parts of the public sector.
Developing an index that can work in other countries is important because it could affect the availability of comparator information. Having a strong theoretical rationale helps here.
Developing stakeholder acceptance and involvement across Phases 1 and 2 will enhance the credibility and acceptance of the Index.
Data collection could be protracted: mitigated by PPG experience in public sector data collection. We have access to most of the information needed and can work to tight deadlines in Phase 1. No original research will be undertaken. The approach could also be piloted in one sector. The most difficult sector is central government where data is less available, but we have a long track record of innovative information compilation here.
Unfairly comparing unlike government organisations: mitigated by the structure of multiple elements considered in phase 1 and the core design of the index in which performance is chiefly relativised to a relevant comparison group. Many central government organisations are one-offs (e.g. there is only one MOD or HMRC within the UK). So comparisons are most difficult here. But again we have extensive experience of how to compare at this level. We can build additional assurance by bringing in ranking with overseas comparators, perhaps using a mix of objective indicators and expert judge panels.