




Literature Review

When considering evaluation, there is a multiplicity of concepts and terms in the literature. Key evaluation terms carry more than one meaning, and different terms are applied to the same concept. Notwithstanding this diversity, the adjectives ‘complex’, ‘difficult’, ‘political’, ‘social’, ‘subjective’ and ‘challenging’ are commonly included in evaluation definitions (Cummings, 2001; Mbabu, 2001; Hirschheim et al., 1998; Saunders, 2000; Mabry, 2002).


This variety of concepts and terminology has been identified as a serious constraint on the development of a coherent body of evaluation literature and as an obstacle when evaluation experiences are shared for learning purposes, and some studies have therefore sought to classify and clarify it (Cummings, 2001; Khadar, 2001). For example, Cummings argues that terms such as ‘evaluation framework’, ‘conceptual framework’ and ‘logical framework’ are used interchangeably, while Khadar states that the words ‘evaluation’, ‘impact’ and ‘performance’ are defined and used in different ways by academics and practitioners.
Other studies attempting to organise the evaluation literature have sought to classify, or to make an inventory of, evaluation frameworks and approaches according to the specific object to be evaluated (Baruah, 1998; Hirschheim et al., 1998; Cummings, 2001). Hirschheim and Smithson, for example, categorise the Information Systems evaluation literature, while Baruah offers a classification of conceptual frameworks used in the development context.


IS Evaluation


As in many other disciplines, various studies have been carried out in the field of Information Systems (IS) evaluation. Hirschheim and Smithson (1998), for example, developed a framework to classify the IS evaluation literature based on the fundamental assumptions of different evaluation approaches. The framework comprises three zones and classifies the literature along a continuum from highly rational/objective (analytic perspective) to political/subjective (interpretivist perspective). The efficiency zone, at one end of the scale, includes approaches that are based on logical and objective assumptions with an emphasis on efficiency – ‘doing things right’ (Hirschheim et al., 1998). Approaches in this zone measure performance or quality against a point of reference or given yardstick (Swanson et al., 1991; Grady, 1993).
Effectiveness, in the middle zone, concerns ‘doing the right things’ and mainly measures risks, management, user satisfaction and usability. There is a plethora of effectiveness techniques for the evaluation of IS, for instance Return on Investment evaluation (Brealy and Myers, 1991) or Cost-Benefit Analysis (King and Schrems, 1978), two of the most widely applied in formal evaluations.
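The logic of these two techniques can be sketched as follows (a textbook formulation rather than one drawn from the sources cited above):

ROI = (total expected benefits − total costs) / total costs

NPV = sum over years t = 0 to T of (B_t − C_t) / (1 + r)^t

where B_t and C_t are the monetary benefits and costs attributed to the system in year t and r is the discount rate. A system is judged worthwhile when the ROI exceeds a chosen threshold or the net present value (NPV) is positive; this reliance on monetary expression underlies the criticisms discussed below.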
Although Efficiency and Effectiveness approaches have been extensively adopted in practice, many weaknesses have been identified in these evaluations. For instance, Effectiveness techniques are unable to capture the intangible and qualitative benefits that IS can bring (Farbey et al., 1992), and establishing a monetary value for, or even measuring, every element in a cost-benefit analysis is neither easy nor objective. Similarly, techniques evaluating usability (Tyldesley, 1988) are criticised because, although usability criteria can be quantified, there is subjectivity regarding the measure in use and its applicability (Hirschheim et al., 1998).
The understanding zone, at the other end of the scale, aims to develop an understanding of evaluation itself. Recognising that evaluation is problematic, the interpretivist perspective does not focus on measurement per se but provides an explanation of evaluation processes that takes into account subjective, political and social factors (Angell and Smithson, 1991). Studies adopting understanding or interpretivist approaches are increasingly found in the literature: drawing on phenomenology, hermeneutics, contextual and processual analysis, philosophy and other social sciences, academics have proposed alternative views of evaluation problems.
Based on Pettigrew’s ‘contextualism’ (1985), approaches that employ a context, content and process framework are part of a growing body of interpretivist studies in evaluation (Symons, 1991; Walsham, 1993; Serafeimidis et al., 1996a; Serafeimidis et al., 1996b; Serafeimidis et al., 2000). Serafeimidis and Smithson’s framework (1996a, 1996b), in particular, proposes a multi-perspective approach that consists of three interrelated layers – context, content and process – taking into account organisational values, potential outcomes, associated risks and social structures.
Using phenomenology and considering issues of power, Introna and Whittaker (2002) offer an alternative to the objective/subjective dualism presented in Hirschheim and Smithson’s classification. They argue that evaluation is also a political process: immersed in an organisation constituted through power asymmetries, it is an ongoing conversation, without a specific start or end, which reflects a unity of action and cognition. Based on Soft Systems Methodology (Checkland, 1981), Avgerou (1995) proposes a framework for evaluating IS that considers negotiation and consultation and whose purpose is to establish whether the system can be implemented within the organisation’s culture and whether the specified benefits are achievable. In addition to these interpretivist views of evaluation, Land (1999) offers a modified version of Kaplan and Norton’s (1992) balanced scorecard which, using a socio-technical approach, adds the perspective of the employee.
Despite this variety of studies offering alternatives for understanding the problems of evaluation, these approaches are rarely used in practice. It has been claimed that they are not implementable, that some of them have limited implications for practice, and that they are time-consuming and expensive (Symons, 1991; Canevet, 1996; Walsham, 1999; Hirschheim et al., 1998). As Hirschheim and Smithson have argued, “…while progress has been made by some academics in utilising interpretive approaches, there is little evidence of its acceptance by practitioners”.

Evaluating Development


Development projects are implemented with a particular idea of what development means. Over the past few years, definitions of development have changed, and this is reflected in the focus adopted when designing policies. Early definitions considered development in terms of economic growth; in recent years, however, development has been seen from a more integral perspective in which a variety of elements are taken into account, such as poverty alleviation, health, education, well-being and the building of a democratic society. The theory or definition of development that motivates a programme or project will therefore have implications for its evaluation.

Frameworks used in a development context have different origins and have been classified into six general types: Domain-based (using key dimensions of sustainability), Causal (e.g. the causal relationship between health and pollution reduction), Sectoral (focusing on health, education, information, etc.), Goal-based (focusing on social well-being, basic needs, prosperity), Issue-based (based on popular trends, e.g. ICTs) and Combination frameworks that combine the above (Baruah, 1998). All large development organisations (e.g. the World Bank, UNESCO) have their own methodologies for evaluating development programmes. In the particular case of ICT-for-development projects, various frameworks have been developed to evaluate specific projects, including e-government, tele-medicine and telecentre initiatives (Menou, 1993; Correa et al., 1997; Saunders, 2000; Mook, 2001; Madon, 2003).



Notwithstanding the existence of these frameworks, there is debate in the field concerning the criteria used when evaluating development interventions. It has been argued that evaluation exercises must consider the non-deterministic and situated nature of ICTs (Avgerou, 2002). Contrary to techno-optimistic approaches (Schech, 2002), when evaluating development, ICT innovation must be viewed as a situated socio-technical process and the context encompassing these processes must be considered (Avgerou, 2002). Analysing technological outcomes and changed work practices is not enough; the scope of the analysis must also consider changes in the ways of thinking and attitudes of participants (Walsham et al., 2000).
Impacts, generally classified as direct, indirect and induced, are defined as the outcomes or effects of a process or an interaction; impact studies therefore attempt to demonstrate the changes between an initial situation and a new one (Menou, 1993, 1999; Menou et al., 2001; Michiels and Crowder, 2001). Although participatory approaches have been critiqued for their limited interpretations of social well-being, there is a proliferation of such approaches for measuring the developmental impacts of ICT interventions (Chambers, 1995). These approaches attempt to capture evidence to establish indirect impacts, the most difficult to identify; however, there is a lack of robust methodologies for obtaining this information (Madon, 2003). Consequently, recent literature presents frameworks which examine the developmental impacts of ICTs using Sen’s work (1999) on capabilities and entitlements (Madon, 2003; Gigler, 2004).

Telecentre Evaluation


Similarly to the evaluation literature on Information Systems, there have been considerable efforts in recent years to develop frameworks which consider the role of ICTs, particularly in terms of telecentres. Since the publication of Hudson’s work in 1984 (Hudson, 1984), there has been growing interest in research into evaluating the efficacy and impacts of telecentres. The telecentre literature offers a variety of approaches and frameworks for assessment, such as the Lanfranco Framework (Lanfranco, 1997), that of the International Telecommunication Union (ITU) (Ernberg, 1998a, 1998b), and IDRC’s Acacia initiative (Whyte, 1999a). Ernberg (1998a, 1998b), for instance, has developed a research framework for evaluating ITU’s telecentres which, though not carried out thoroughly, provides an extensive and helpful questionnaire profile. Whyte (1999a) produces an interesting review of evaluation techniques, suggesting indicators for sustainability, performance, application of information, and social and economic impacts. Furthermore, a variety of stakeholders are considered, and basic research questions, reporting, analysis, research methods and indicators of demand for services are recommended.
Combining the measurement of social development impact with the economic sustainability of telecentres, Goussal (1998) suggests social development indicators that impact evaluations should consider, including social characteristics such as demographic statistics, quality of life and social services. Detailed economic models are also offered, and it is suggested that evaluation should start from impact-driven, bottom-up criteria, recognising the need for economic sustainability without disregarding poorer communities.
A compilation of IDRC studies (Gómez and Hunt, 1999a) offers comprehensive frameworks for evaluating telecentres, providing templates for a multi-dimensional research approach and diverse approaches to telecentre evaluation. Hudson’s study (1999), for example, looks in depth at developing research for telecentre evaluation. Scharffenberger (1999) proposes a telecentre evaluation methodology and survey instruments, and is particularly helpful in noting problems encountered when implementing evaluations. Whyte (1999b) offers a further approach to the evaluation of telecentres. In general, these studies discuss aspects such as how to plan an evaluation, methods for conducting research, how to design initial research questions, categories that define telecentres, and telecentre indicators.
In the same collection, Harris (1999) states that “output measures” relate to the additional benefits that result from the community’s use of telecentre services and focus on establishing the impact of access to new information sources on the community. He believes that output measures are more important than technical measures because they “provide the acid test for telecentre evaluation” (Harris, 1999). Holmes (1999) provides a gender analysis of telecentre evaluation and offers an examination of issues that affect women’s access to ICTs. She addresses how gender can be meaningfully integrated into evaluation, analysing men and women working as telecentre managers and operators, similar experiences across developing countries, and the extent to which women are served by telecentres. Further research on gender evaluation methodologies (GEM, 2003) has presented numerous methods, such as storytelling, to engage telecentre users and community members; it is argued that storytelling, by itself and combined with other tools, is a “potent way to evaluate projects”.
Reilly and Gómez (2001) address two weaknesses of this literature: the lack of case studies showing how these methods are applied from start to finish, and the isolation of earlier studies from the regional contexts that may affect telecentre development, which makes regional comparisons difficult. They offer a study comparing evaluation approaches in Asia and Latin America. Drawing on IDRC’s evaluation experiences in these regions, and using the “guiding principles for sound Telecentre Evaluation” (Gómez and Hunt, 1999a) as a framework, they observed regional and institutional differences in how telecentres are evaluated. In addition to regional comparisons, the proposed framework is intended to determine the usefulness of evaluations, their financial accountability, and their ability to build local capacity and to share lessons.
With a particular emphasis on private sector participation in telecentres, the National Telephone Cooperative Association (NTCA) offers an interesting approach to impact assessment (NTCA, 2000). The methodology differentiates between types of impact – direct, indirect and induced – and between impacts over time. Recognising that few methodologies have considered indirect and induced telecentre impacts, and that impacts and influences change over time, it distinguishes between ‘pre-start-up’, ‘early impacts’ (1-2 years), ‘mid-life’ (3-4 years) and ‘late-life’ (5-7 years) phases when evaluating telecentres.
Studies of Venezuelan telecentre evaluation practices are scarce in the literature (Urribarrí, 2003). Urribarrí’s work, for instance, offers a critique of the role of Venezuelan ICT policies, the Infocentros project and development objectives. Moreover, when Venezuelan activities have been assessed, results have been expressed in terms of rankings or global indexes (UN/ASPA, 2002; CAVECOM, 2003; Finquelievich, 2003; West, 2003). The dimensions included in these studies assess technical aspects such as the number of applications, internet users, policies and infrastructure, rather than the impact of ICTs on Venezuela’s situation. Similarly, other studies merely mention Venezuelan telecentre practices without offering any critical analysis of their evaluation practices (Gómez et al., 1999b).
Particular cases of Latin American countries’ evaluation experiences are presented in the literature (Baron, 1999; Delgadillo et al., 1999; Herrera, 1999; Robinson, 1999; San Sebastian, 1999). One could argue that the challenges of, and solutions to, telecentre evaluation in Venezuela are similar to those of other countries in the region; however, although there are similarities in language, history, political and socio-economic situation and culture, an examination of the specific context is still required.
