The Life Sciences, Biosecurity, and Dual-Use Research: Further Details on a Proposed Method for Engaging with Scientists

Dr. Brian Rappert




Seminar Design
Sections two and three argued that the recent turn to dual use issues in the life sciences could be productively considered through a strategy that sought both to collect data about practising researchers’ assessments and to engage them in an educational process. As suggested in section four, with the attention given to exploring individuals’ understandings and concerns as well as to deliberation among peers, the focus group method provides at least a starting basis for such undertakings. Yet, since the term ‘focus group’ encompasses a wide range of activities and agendas, crucial but often neglected questions exist about the basis for questioning within them. In addition, their educational potential remains relatively underdeveloped. Given these considerations, important planning choices must be made.
This section discusses the initial design of 26 seminars conducted in the UK with practising life scientists during the academic year 2004-5. As previously mentioned, so far in 2005-6 further seminars have been held in the Netherlands (4), the US (12), Finland (2), and South Africa (7). Because the project extending the seminars beyond the UK is still ongoing and those sessions were held only recently, this paper does not address them; in what follows only the UK seminars are discussed. While not wishing to present the work undertaken as a panacea, this section considers how the aims of exploration and education can be achieved through a relatively low cost, adapted form of the focus group method. The next two sections do this by first briefly considering the themes of the discussions and then providing a more detailed consideration of the rationale and benefits of the questioning undertaken.
Of the range of those in industry, government departments, and educational institutions who undertake life science research, our study took as its population those in university life science departments, including university faculty, technical support staff, and postgraduate students. There were a variety of reasons for this purposive sampling. One, many of the novel dual use controls being proposed are primarily designed for civilian research outside of government, military or corporate laboratories, the latter grouping being overall much more accustomed than universities to institution-specific restrictions on the conduct and communication of research. Two, as British university research is already subject to numerous biosafety regulations and research protocols, participants would have given thought to general issues of governance. Three, universities are relatively open institutions (e.g., in comparison to industry) that have a tradition of facilitating discussion about societal issues. This point, however, should not be taken to imply that universities are devoid of tensions in undertaking inquiry. Karren (1997) argues that the interaction fostered through departmental ‘colloquium’ needs to be seen as contending with a number of competing demands associated with expertise and intellectual debate. As she argued on the basis of an empirical study, there was often a:
need to avoid overly heated and hostile exchanges while ensuring boring discussions were not tacitly promoted; to create an appropriately playful/serious environment that did not tilt too far in either direction; to make certain that the discussion became neither social chitchat nor a lecture from the knowledgeable to the ignorant; and to reconcile the contradictory injunctions about how experience/status differences should be managed (ibid., 134).
So while university seminars are notionally about the status of ideas, it is wrong to see them as devoid of social or personality considerations that might structure inquiry.
University staff already have extensive demands on their time. In the UK, for instance, by some measures this occupation has one of the highest rates of unpaid overtime (TUC, 2004). This situation makes scheduling group (or any other) interview sessions difficult. Initially, the seminars were intended to be convened in the evening with the help of the regional offices of the Institute of Biology, a professional body representing biologists in the UK. This, however, proved laborious and ultimately unsuccessful. Instead, the seminars were offered as part of university departmental seminar series. 76 universities with active biology research seminar series were approached. 26 seminars were held in total (two being pilots), involving 624 participants and lasting between one and two hours: 13 in England (excluding Greater London), 6 with universities in Greater London, 3 in Scotland, 2 in Wales, 1 in Northern Ireland, and 1 in Germany for the purpose of testing for any major comparative differences in responses.
Using pre-existing university seminar series provided a number of practical benefits: the room and equipment were already arranged; no monetary compensation was required, as in typical focus groups (and therefore its impact on the discussion was not relevant); because in many British universities staff and postgraduate students are expected to attend the seminars, this proved a relatively straightforward way to secure audiences with varied profiles who were also relatively at ease with the setting; and the expectation of attendance meant additional time demands were not imposed on participants. As a relatively minor negative, the lack of control over the specific venue meant the quality of audio recordings suffered.
Assessing the types of interactions fostered through the use of pre-existing groups is complicated. Here benefits mixed with negatives, a situation which underlines the importance of attending to the implications of the choices made in the research design. Since many of the issues discussed related to how particular institutions might govern research, conducting discussions within existing departmental groups was prudent. Yet, the acquaintance of participants with one another also threatened to produce conformity to the views expressed by those in hierarchical positions or to result in discussions fractured along established divisions (see below). Also, university departments in the UK differ considerably in terms of their size and composition. The number of people participating ranged from 5 to 75, with an average of 24. No systematic differences were notable in the ease of initiating and carrying on discussions due to audience size. While this average size enabled many people to be involved in the seminars, it was also significantly higher than in typical focus groups. As such, the seminars involved a trade-off between the space enabled for individual respondents and the breadth of those reached. In the end, the lack of familiarity of attendees with dual use issues (see below), and therefore the typically exploratory quality of the discussions, fitted relatively large groups.
The seminars typically began with self-introductions by Dando and myself, a brief statement about the topic of dual use research and the importance of initiating discussion about it by practising researchers, and a request for permission to make anonymous audio recordings of the session. In terms of their composition, the seminars were not simply presentations with a question and answer period at the end. Rather, each consisted of a series of slides with information regarding the future threats posed by biological weapons, the relation between current biomedical and bioscientific research and new weapons possibilities, and the range of national and international measures currently being implemented or proposed. Discussions were initiated through questions posed after speaking to the information on the slides.
The seminars differed in important respects from common prescriptions for focus groups. As focus groups typically try to ‘tap’ individuals’ experiences or preferences, the advice is often given to start with general, bland, and non-challenging questions that can ‘loosen up’ participants for more substantive questioning. However, given our initial presumptions (later confirmed) about the lack of consideration or even awareness of dual use issues among practising researchers (see below), operating in this manner both had less justification and risked losing the attention of participants. Instead, after the introduction, one of the controversial dual use cases was described and the question asked of what should be done (i.e., either the interleukin-4 mousepox experiment, which inadvertently suggested a way to manipulate smallpox, and the question of whether it should have been published; or the artificial chemical synthesis of poliovirus and the question of whether it should have been conducted in the first place). An early example of the sequence of slides and key questions is shown in Box 1.
Box 1: Slide Titles and Questions in an Early Seminar

1. Title slide for ‘The Life Sciences, Biosecurity, and Dual-Use Research’ seminars

2. What are we doing?
An explanation of the scope and goal of our research and seminars

3. Cause for Concern?: Synthetic Polio Virus
Question: Should it have been done?

4. Cause for Concern?
Slide detailing recent advances in synthesizing capabilities
Question: Is artificial synthesis still a good idea?

5. Mousepox Experiment
Question: Should such experimental results have been widely circulated?

6. The British Reserve
Slide suggesting an example of suppressing the implications of research
Question: What options are there for the publication of research?

7. US Fink Committee
Slide detailing proposed US system for the oversight of research
Question: Would such a system be helpful or dangerous?

8. Spanish Flu: What Should be Done?
Slide detailing efforts to recreate the deadly 1918 Spanish Flu
Question: Are there any limits on what should be done or how it is communicated?

9. Codes of Conduct
Background information about British and international codes activities
Question: What individual and collective responsibilities should be included?

10. Thanks and contact information

The rationale for the information and questions posed is a matter of considerable importance, especially because of the educational aim of the seminars. These issues are considered in some detail in section six. For now, it is worth noting two further differences between the conduct of the seminars and that common in focus groups. One, the seminars were transformative, in the sense that many of the questions and their order altered over time. Both because of the aim of initiating discussion and reflection and because of the limited understanding of what researchers thought about dual use issues, it was necessary to reappraise what we asked and how. So, while in each seminar questions were asked about whether there should be any limits on what research is done vis-à-vis dual use concerns, whether it would be sensible to restrict the communication of ‘dual use’ results, and whether systems of research oversight were prudent, the seminars differed in the ordering of questions, the other questions posed, and the follow-up probes used.


Two, and as a related point, the number of seminars conducted went beyond typical prescriptions. For instance, Morgan (1998, 81) advises that if ‘the discussions reach saturation and become repetitive after two or three groups, there is little to be gained by doing more’ sessions and, furthermore, that if one ‘can clearly anticipate what will be said in the next group then the research is done’. Instead of taking this approach, which is indebted to thinking about research as a process of elucidating information, the emergence of common themes was treated as a way to generate further examination of our and their presumptions and inferences.
A Thumbnail Sketch of Responses
Following on from these design considerations, this section briefly considers the main themes of the seminars, though an extended examination is beyond the scope of this paper. Rather, the intent is to discuss pervasive themes and how they factored into the choices made about the conduct of the seminars that are developed in the next section.
Interactive group discussions are not straightforward to analyse. Their interactive dimension means that the discussion can evolve along unique lines in particular seminars. Their group dimension means that the statements made should not be treated as merely an aggregation of one-to-one interviews. As noted above, there is reason to think individual responses offered in (existing) peer groups are likely to differ in some respects from those given in one-to-one settings. Crucially though, this does not thereby imply the latter should be regarded as more authentic by some metric (Morgan, 1993). As has been argued, group interview settings can both produce conformity and encourage openness (Kitzinger, 1994). Each method of research should be scrutinized in terms of its underlying assumptions and the trade-offs in the commitments made. As argued above, since what was needed in the case of dual use life science research was an exploratory process of peer engagement to enable the formation of standpoints, group session methods had definite overall advantages.
In addition to these widely recognized considerations, questions can be asked about the analytical status of the responses given. Morgan (1998, p. 25), as with many others, maintains that focus groups are a way of getting closer to ‘participants’ experiences and perspectives’. Yet, much of the recent work in social science regarding the discursive status of accounts would counsel against extracting statements made in some particular form of interaction as simply representing individuals’ attitudes (e.g., Edwards, 1997; Silverman, 2004). Taking this orientation forward in the study of environmental risks, for instance, Waterton and Wynne (1999) critique the idea that attitudes should be regarded as stable, coherent, and unambiguous entities that can be tapped through surveys or interviews. Instead, attitudes are expressed ‘(a) in relation to their relevant social context…(b) interactively – that is, they actively form attitudes through the opportunity of discussing issues that are not often addressed;…and (c) as a process of negotiation of trust between themselves as participants and…researchers’ (ibid., p. 127). In the case of risk assessments, that might mean that the accounts offered (be they as part of surveys, one-to-one interviews, or group interviews) relate to matters such as: the historical context for consideration, the sequence of questions and responses that have already been made, the perceived uses of the research, trust in the institutions that control risks and pose questions, and the sense of agency of respondents. A general implication of this and related studies is the inappropriateness of treating responses made about complex topics as discrete entities to be added together to provide a summation of individuals’ ‘attitudes’. Again, the upshot of such assessments is not to condemn all methods of social research, but rather to attend to the underlying assumptions of each.
In light of such discussions, the analytical orientation to participants’ responses could be a topic of detailed and prolonged reflection. It is not an aim here to provide an exhaustive account of the interactive dimensions of the seminars undertaken. Just as the choice between competing research methods demands consideration of the purpose of the research and the problems being addressed, so too does the choice of what sort of analysis is provided. As the central purpose of this paper is to suggest a strategy for engaging with scientists in an emerging area of public concern, the remainder of this section provides a broad, albeit sketchy, overview of the dominant themes in the seminars, which then sets up a discussion in the next section about how we questioned participants in response.4
In this regard, two overall themes are worth noting. First, very few participants indicated giving previous consideration to the dual use potential of life science research. While this was not completely unexpected given the interviews conducted in 2003 noted in section three, the extent of the absence was surprising. We had presumed at least many would be aware that there has been continuing international debate about the security dimensions of the findings and techniques of advanced research, but this proved mistaken. As a result of the apparent low level of engagement with dual use issues expressed in the first few seminars, prior to discussing the case of the experiment with IL-4 in mousepox, we began asking how many participants had even heard of it. Reported levels of awareness of more than 10 per cent were extremely unusual.
Second, despite important differences, it is possible to identify broad themes of commonality. As mentioned above, while changes were made in the content of the slides throughout the research process, we devised information and slides for all the seminars that broadly addressed three key questions in current policy debates: Are there experiments or lines of research that should not be done? Is some research better left unpublished or otherwise restricted in dissemination? Are the envisioned systems of pre-project research oversight sensible?
To the question ‘Are there experiments that should not be done?’, the vast majority of responses supported undertaking the ‘contentious’ experiments cited, and did so most often by stating that the results obtained through them were in some sense inevitable. Herein, the question of whether something should be done missed the point that it would be done (in the end) by someone. There were variations on the general theme of inevitability, with some saying that the knowledge necessary for malign applications was already out there and so restricting further research would be useless, others that efforts to restrict research in only certain locations (e.g., universities, the West) would be ineffective, and still others that attempts to somehow limit particular experiments would be futile because the underlying knowledge in the field could indicate directions for novel malign applications. Those that did question the advisability of undertaking some research tended to be (as far as we could tell) students.
The advisability of restricting publications was overwhelmingly doubted; reasons for this included the importance of communication in countering the deliberate and natural spread of disease, the limitations of the details given in articles for enabling the replication of research, and the status of publications as just one way researchers share information. Further, strong scepticism was expressed about the advisability of an enforceable, binding biosecurity oversight system, for such reasons as the difficulties of weighing costs and benefits, the ease with which those with malevolent intent could circumvent controls, and the amount of existing regulation. Elsewhere (Dando and Rappert, 2005), such overall themes were marshalled to contrast two Weberian ideal types, those of ‘security-conscious’ and ‘classic open science’ respondents, and to argue that the latter is the much more typical heuristic type.
Questions of Engagement
With an emerging understanding of the prominent responses, ever-present choices had to be made about the proper course of further questioning. Just as the issues become much more complicated when one moves beyond abstract statements about the need for education about dual use issues to consider what in particular should be done, so too, when one moves beyond statements about the potential of focus group-type methods to explore people’s experiences in their own vocabulary, difficult issues must be addressed about what exactly should be done. This section discusses the broad outline of the strategy of questioning employed and what it enabled by way of data collection and educational engagement.
As suggested above then, with each general research method there is a need to attend to the types of interactions fostered and the strengths/weaknesses of each approach. In the case of undertaking group-type interviews through ‘focus groups’ about dual use issues as part of university departmental series, that means recognizing the potential for group conformity, the possible threatening quality of questions, the scope for individuals to profess rationalized views, and the prospect for the internal dynamics of university seminars to constrain discussion. Against such concerns, the seminars conducted here did not merely seek to elicit responses. Instead in their content and conduct they sought to make the data, assumptions, and inferences underlying responses explicit and to then openly test them.
This basic orientation was inspired by the substantial work of Chris Argyris and colleagues (e.g., Argyris, 2003; Argyris and Schön, 1996; Argyris et al., 1985), who have sought to devise forms of interaction that promote mutual learning. As Argyris has argued, despite widely professed commitments, many organizations and inter-personal relations are characterized by features that discourage inquiry and learning. These include the presence of covert attributions of motives, the treatment of one’s own views as obvious and correct, and the use of unsupported evaluations. The result is often personal defensiveness in questioning and the (re)production of invalid assessments and inferences. To counter this, Argyris advocates the seemingly simple suggestion of making data, inferences, assessments, and private attributions explicit and treating these as disconfirmable through public testing. So the prescription is to challenge any assessments, but in a way that fosters further inquiry into their basis. His analysis, though, offers a critique not just of many types of social interaction, but also of forms of social research which strive to mimic artificial experimental conditions in the physical sciences. Instead, he advocates undertaking research which, through iterative processes of action and change, enables the greatest reflection on the substantive concerns of individuals and the rules of inquiry.
In terms of the seminars then, whether the responses offered were given out of concerns about group acceptability, personal antagonisms, or other motivating factors, an upshot of Argyris’ work is the importance of encouraging a questioning of the justifications for statements. In other words, the concern is not so much with whether responses are by some metric authentic or biased, but rather with treating accounts in their own right (whatever the situational, interpersonal, or other factors impinging on them) and finding ways of testing the basis for whatever is said in the service of promoting mutual understanding and further reflection.
How this can be achieved in practice is a topic in need of elaboration. We strove to adopt, if not always realize in practice, a fairly formulaic method for responding to the answers given to the questions posed:

1. Restate what was said;

2. State what we understand this to mean;

3. State any evaluations/commentaries/inferences we draw from the statement (i.e., what we take respondents to mean or the implications of what they say);

4. Put a question back to them as to whether what we said was accurate.

In this way the effort was made to acknowledge individuals’ responses, to test the ‘ladder of inference’ (ibid.) underlying assessments, to make those a matter of further discussion and, through doing these actions, to illustrate our commitment to further inquiry. This then set up a basis for others in the audience to agree with or challenge the data, inferences, and assessments (or their absence) offered by others. In this manner, we sought to move beyond a soliciting of views to an examination of reasoning. The interaction between participants was essential in moving the locus of questioning and the burden of substantiating positions away from us as facilitators to them as participants.


Trying to promote interaction in this manner, though, should not be understood as a straightforward exercise. Achieving the sort of openness to inquiry sought was a skilful task where learning was required on our part. As well, the negotiation of expertise was a constant theme. While participants were scientific experts in their particular fields, we as presenters were knowledgeable about policy debates that few others were even aware of, and we as individuals had an obvious interest in raising this topic in the first place. Thus, whatever the importance of being non-judgemental, as presenters we could hardly pretend not to be experts about the dual use issues being discussed (as is often suggested in facilitating focus groups, see Kitzinger and Barbour 1999; Morgan 1998). But rather than use that expertise to close off debate by proposing definitive facts and assessments, when asked for our assessments of situations, the effort was made to substantiate them in such a way as to make our reasoning explicit and to put those views up for a public test.
Of course, the strategy as outlined so far of shifting the locus of questioning depended on participants actively forwarding accounts, and doing so in a manner where enough diversity was expressed to enable further peer-to-peer consideration of the data and inferences supporting evaluations. With the additional factor of participants’ relative lack of prior consideration of dual use issues, realizing participant-to-participant dialogue could hardly be presumed. Therefore, by revising the seminars between sessions, we sought to structure them such that within particular sessions we could question the basis for previously stated evaluations. In an effort to understand the basis for evaluations made about the biosecurity issues posed, the seminars’ content was altered so as to test out participants’ statements.
To elaborate, while in each seminar questions were asked regarding what research was done, how it was published, or whether systems of research oversight were prudent, the sequencing of such slides and the content of the other slides evolved over time with the intent of enabling further questioning of stated evaluations. Consider this strategy as it related to the theme of inevitability. As elaborated previously, claims about the inevitability of scientific development loomed large in many justifications for downplaying or dismissing questions about whether certain experiments should not be conducted on biosecurity grounds, whether scientific papers should be modified or even withheld from publication in light of such concerns, or whether viable security-related systems of research oversight could be established. Herein, the question of whether some line of work should be done missed the point that it would be done (in the end) by someone, which in practice would further mean that those ‘sufficiently skilled’ would know about it. In this sense, then, any limitations or controls would be futile.
The frequency with which such responses were offered was somewhat unexpected to us. Many of our initial slides and prepared questions were designed to test for the boundary at which participants might start expressing concerns. So, we included a slide about the artificial synthesis of polio virus (which we expected few researchers would say should not have been done) and then followed it with a slide indicating the substantial pace at which synthesising capabilities have since moved ahead, to see if this gave any reason for pause. As well, the current effort to recreate the 1918 Spanish Flu was used as an ‘extreme case’ for asking whether there were any limits to what should be done or communicated. Yet, because science was so often presented as more or less inevitable, these sorts of considerations or cases were deemed inconsequential.
As a result of such interactions in the first several seminars, we ended up combining the initial polio virus slide with the one giving details about the pace of development and dropped the case of the Spanish Flu altogether. We then had to consider how to better understand and probe characterizations of inevitability from there. A modified slide was introduced in subsequent seminars that detailed the multi-billion dollar expansion of biodefence programs in the US. We had hoped that, by bringing to the fore the contingent policy choices made about what gets funded in the life sciences (and thus what science gets done), this would encourage some participants to openly query claims made by others about inevitability.
When this failed to happen, we then introduced a slide summarising the themes of earlier ones, in which we explicitly challenged notions about inevitability by comparing the limited funds dedicated to many tropical diseases against those recently made available for pathogenic agents. This and other summarizing themes were put back to participants in the spirit of publicly testing out views. However, presenting such multiple and controversial points in this manner rarely resulted in much discussion; in fact, it tended to stop whatever dialogue had been fostered up to that point. Starting from seminar 15, we varied this by discussing some of the main dilemmas identified in previous seminars relating to inevitability in a more removed and formally neutral manner. However, again, this proved a conversation stopper.
In response, we then varied the way in which we questioned statements about inevitability: first, by being sure to probe carefully for the assumptions underlying such statements when they were initially made; and second, by then challenging those accounts through probes whenever a consideration pertinent to them was later brought up (e.g., in relation to the funding of research). Embedding our queries in this way generated much more discussion about whether the development of science is ‘inevitable’.
In a similar manner we also sought to question other related presumptions. Assessments of inevitability typically relied on the assumption that once research was conducted, it would then automatically become known to others with suitable expertise in the field – in other words, as we repeatedly heard, once knowledge was generated ‘the genie was out of the bottle’. Probing for the reasons why the dissemination of research was unavoidable indicated a number of issues, such as the pressures placed on academics to publish and the advent of Internet publishing, which meant vast amounts of resources were easily available. Yet, such statements existed in an uneasy relationship with another claim often made: that the publication of some contentious research posed little danger because of the difficulty of replicating results from the necessarily limited information given in formalized articles. With our growing understanding of responses, when such contrasting assessments were offered over the course of one seminar, this provided an occasion for encouraging dialogue between participants; when only one assessment was offered, we could put forward the other for further deliberation.
In addition, the question of how appropriate it is for scientists to actively communicate the possible implications of their work provided a basis for thinking about how research becomes ‘known’ and is thus said to be inevitable. While originally, for the case of the IL-4 mousepox experiment, we had focused on whether the researchers should have published their results in general, eventually we began to appreciate that participants often voiced starkly contrasting views about whether it was appropriate to make a distinction between the audiences for the dissemination of results. Whether researchers are compelled by current funding mechanisms, or obliged because of their social responsibilities, to ‘publicize’ the implications of their research through non-specialist journals was a topic on which contrasting accounts were routinely offered. By examining the underlying assumptions about what publications provide and who should be regarded as an appropriate audience, we were able to examine and publicly question the all-or-nothing framing often given to initial questions of whether ‘contentious’ research should be published.
Thus, we were able to challenge the evaluations given without doing so in a directly confrontational manner. This had beneficial implications within the specific setting of university seminars. The first few responses in each session were often given by senior participants; further, in many cases these responses were lengthy, expressed definitive positions, and were often politely dismissive of dual use concerns. Through the strategy of questioning employed, though, it was possible to publicly scrutinize the assumptions informing them and their ultimate validity. In this way and others touched on above, our seminar design with these ‘technical’ elites differed significantly from other approaches in elite interviewing that suggest the need to avoid challenging authority so as to maintain access, or that in practice take information given by elites in an unquestioning manner (Kezar, 2003).
As a final note for this section, it follows from the argument above that we as facilitators did not strive for the type of substantive ‘neutrality’ often stipulated as part of running successful focus groups (e.g., Krueger, 1998). Since much more was sought here than the eliciting of views about products or services, much more was required than merely asking questions and ensuring the participants kept to them. To the extent neutrality was sought, it was sought in the form of a commitment to inquiry rather than advocacy. The extent to which it was achieved was a joint accomplishment between us and participants.