About the Author(s)


Robert J. Scholes
Global Change and Sustainability Research Institute, University of the Witwatersrand, South Africa

Gregory O. Schreiner
Council for Scientific and Industrial Research, South Africa

Luanita Snyman-Van der Walt
Council for Scientific and Industrial Research, South Africa

Citation


Scholes, R.J., Schreiner, G.O. & Snyman-Van der Walt, L., 2017, ‘Scientific assessments: Matching the process to the problem’, Bothalia 47(2), a2144. https://doi.org/10.4102/abc.v47i2.2144

Note: This paper was initially delivered at the 43rd Annual Research Symposium on the Management of Biological Invasions in South Africa, Goudini Spa, Western Cape, South Africa, on 18–20 May 2016.

Original Research

Scientific assessments: Matching the process to the problem

Robert J. Scholes, Gregory O. Schreiner, Luanita Snyman-Van der Walt

Received: 03 Aug. 2016; Accepted: 15 Nov. 2016; Published: 31 Mar. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: The science–policy interface process – known as a ‘scientific assessment’ – has risen to prominence in the past few decades. Complex assessments are appropriate for issues which are technically complicated, multifaceted and of high societal interest. There is increasing interest from the research community that studies biological invasions in undertaking such an assessment.

Objectives: After providing the relevant background and context, the article describes key principles and steps for designing, planning, resourcing and executing such a process, and presents evidence that high-impact assessments enhance scientific careers.

Method: Experience from international and national assessments, most recently the South African scientific assessment of shale gas development in the Central Karoo, was used to develop this generic guiding template for practitioners. Analyses of researcher publication performance were undertaken to determine the benefit of being involved in assessments.

Results: The key success factors for assessments mostly relate to adherence to ‘process’ and ‘governance’ aspects, for which scientists are sometimes ill-equipped. As regards publication outputs, authors involved in assessment processes demonstrated higher H-indices than their environmental scientist peers. We have suggested causal explanations for this.

Conclusion: Effectively designed and managed assessments provide the platform for the ‘co-production of knowledge’ – an iterative and collaborative process involving scientists, stakeholders and policymakers. This increases scientific impact in the society–policy domain. While scientists seem concerned that effort directed towards assessments comes at the expense of scientific credibility and productivity, we have presented data that suggest the opposite.

Introduction

It is a tenet of rational management that those responsible for making decisions should periodically pause to assess the nature of the problem to determine whether they are on course to address it in the most effective manner. This is particularly true for complex problems, where the understanding of the causal factors is often imperfect and where there is a low level of agreement on how best to intervene, or what capacity is required to do so. This is the domain of ‘adaptive management’, which has as an essential element a reflexive loop, in other words, the necessity to gather information and use it to critically test both the management actions which have been taken and the assumptions which underpin those actions (Norton 2005).

‘Science’, as a distinct human activity, dates from the Age of Enlightenment, about 400 years ago. For the purposes of this article, we define science as an activity that attempts to understand the behaviour of a subject through the development of an explicit and coherent body of theory and its testing against evidence provided by observation and experimentation. This definition encompasses the natural and physical sciences as well as the human sciences. For much of its history, science was perceived to be conducted by scientists, more or less exclusively for scientists, the so-called ‘Mode 1’ science [Gibbons et al. 1994; Ziman 2000, also see various critiques of this simplification, such as by Hessels and Van Lente (2008)]. Especially during the 20th century, society came to consider science (and the associated technology) as the pre-eminent means of resolving many problems and challenges (Stokes 1997), and governments became the dominant funders of science. As a result, science is collectively expected to address questions posed by society, and report back to society – in intelligible terms – on those questions. This has been referred to as ‘Mode 2’ science.

The process of feeding the findings of scientific research into society was initially conceived as ‘information transfer’, a one-step, unidirectional communication exercise, from ‘researchers’ to ‘users’. By the 21st century, weaknesses in the unidirectional transfer model in addressing problems with both a high technical complexity and a high level of social interest had become clear. The science–society (or ‘science–policy’) interface has since been recast as an iterative, multi-way dialogue, and the generation of knowledge as a shared process between researchers, practitioners and stakeholders, a so-called ‘co-production’ of knowledge. This has required a much closer collaboration between the natural sciences and social sciences in an integrated and participatory manner, dubbed ‘transdisciplinarity’. In the most recent iterations of transdisciplinary scientific assessments, there have also been attempts to incorporate knowledge systems other than science as defined above, such as indigenous or local knowledge (Ash et al. 2010).

Scientists and other technical experts have long been requested to assess particular issues of societal concern, but it is only since the last decade of the 20th century that scientific assessments have focused on transdisciplinarity and have become institutionalised and highly organised. Transdisciplinarity evolved from single-discipline, multidisciplinary and interdisciplinary approaches to address societal issues in an inclusive and holistic manner (Nicolescu 2002). In this ongoing process, what is meant by the term ‘assessment’ has also morphed, from a process segregated by research discipline, with each study carried out by one or a few domain experts, to a highly elaborate process, conducted collaboratively by hundreds of experts across many disciplinary domains. Several modern scientific assessments have critically influenced policy, for instance, in the arena of climate change (the Intergovernmental Panel on Climate Change assessments), atmospheric pollution (the Ozone Assessment) and resource conservation (the Millennium Ecosystem Assessment).

Globally, and in South Africa, an important societal question relates to the invasion of alien species (i.e. those originating elsewhere) into native ecosystems, and its impact on the capacity of ecosystems to provide services such as water (Enright 2000; Le Maitre et al. 2002; Van Wilgen et al. 1997), agricultural resources (Paini et al. 2016; Pimentel et al. 2001) and a suitable habitat for native biodiversity (Clusella-Trullas & Garcia 2017). There is also uncertainty about the extent to which invasive alien species impact local livelihoods (Le Maitre et al. 2002; Shackleton et al. 2007), diminish natural capital, destabilise ecosystems and jeopardise economic productivity (Richardson & Van Wilgen 2004). The issues associated with invasive alien species transcend biophysical, social and economic disciplinary domains and in some cases have led to strongly polarised views in the scientific community and broader society (Zengeya et al. 2017). Examples of controversial issues in South Africa include the desirability of maintaining populations of alien trout in streams where they provide sport fishing but eat or compete with endangered native fish species, and the necessity to control species such as the jacaranda tree, which although alien and mildly invasive has become iconic to the heritage and aesthetics of some South African cities. The resolution of such issues may be aided by applying a scientific assessment approach. The Alien & Invasive Species Regulations (South Africa 2014) call for a periodic ‘status report’, which has some of the attributes of a scientific assessment, to inform adaptive management strategies. This article was sparked in part by the intention of the research community which studies biological invasions to undertake such a process (Wilson et al. 2017). What would be the design of a suitable process? Who decides what questions must be addressed? How should the teams be appointed so that they represent a balance of views? How do you know when you are done? This article draws on experience in a range of assessments, most recently Shale Gas Development in the Central Karoo: A Scientific Assessment of the Opportunities and Risks, to provide some guidance on science–policy processes in general (Scholes et al. 2016).

A spectrum of assessment styles

The dictionary definition of ‘assessment’ is ‘the evaluation or estimation of the nature, quality, or ability of someone or something’ (Oxford English Dictionary 2016). Within technical domains, assessments could be defined as a critical evaluation of information, with the aim of guiding decisions on an issue of public interest. A more pertinent definition of the large, complex assessments which have emerged over the past few decades was offered by Walt Reid, the director of the Millennium Ecosystem Assessment (MEA 2005): ‘a social process designed to bring the findings of science to bear on the needs of decision-makers’.

Scientists are used to evaluating evidence, but can do so in a range of ways, with different audiences, purposes and methodologies (Table 1). The procedures for writing an ‘expert report’ or a ‘scientific review’ are relatively well understood and part of the training of most researchers; therefore, this article focusses on the less familiar area of ‘complex assessments’.

TABLE 1: Three broad types of assessment, with different attributes, audiences, processes and outcomes.

The first of the modern assessments (Figure 1) of a complex, socially important problem is usually considered to be the Ozone Assessment (WMO 1985), although it too had antecedents. This assessment paved the way for the adoption of the ‘Montreal Protocol on Substances that Deplete the Ozone Layer’ in 1987 (UNEP 1987). Its success inspired the formation, in 1988, of a permanent assessment body for climate change, the Intergovernmental Panel on Climate Change (IPCC). This body was thus established even before the United Nations Framework Convention on Climate Change was signed. Its first assessment (IPCC 1990) was more like an ‘expert report’ than a complex assessment as contemplated here (Table 1); however, by the time of the second assessment (IPCC 1995), the general form and process had evolved (and continues to evolve) into the shape now widely copied, with adaptations, in many domains. The successive climate assessments (IPCC 1990, 1995, 2001, 2007, 2013) are credited with making possible the historic agreement – by 195 countries in Paris in December 2015 – to take concerted action on climate change. Although the topic of climate change has become ever more technically complex, the level of societal understanding of the problem and the consensus on the need for action have increased over the past two decades, and the credibility of voices which dispute the reality of climate change or the necessity to curb it has decreased (Cubasch et al. 2013).

FIGURE 1: Complex assessments undertaken internationally and in South Africa.

The Millennium Ecosystem Assessment (MEA 2005) set out to bring the power of the assessment process to the problems of biodiversity loss and ecological degradation by applying the then novel conceptual lens of ‘ecosystem services’. Ecosystem services are the benefits which nature provides to people (Ash et al. 2010; Daily 1997), a complement to the then prevalent view that nature is a domain separate from people and that the protection of nature from people is an ethical rather than a utilitarian imperative. The conceptual framework for the MEA (2005) viewed people as integral parts of ecosystems and focused on the linkages between ecosystems and human well-being. The MEA was frustrated in its attempt to be constituted as a formal intergovernmental process and was instead supported by funding from philanthropic foundations, non-governmental organisations and the private sector. This meant that the MEA did not have a direct governmental authorising and receiving environment. It was nevertheless very influential, including in government-linked environmental and conservation agencies, as well as in the scientific domain and the private sector.

Not being an intergovernmental process, the MEA had the advantage of being free to experiment on how to conduct an assessment to a much greater degree than is usually possible in more formalised contexts. The outcome was a much more diverse body of experts (including many more women, developing-country experts and early-career researchers than is typical, for instance, in the IPCC, where the experts are nominated by their governments). The MEA also made pioneering attempts to conduct assessments at a range of scales (e.g. three scales in Southern Africa; Scholes & Biggs 2004) and to include traditional and local knowledge (Reid et al. 2006). The success of the MEA led to the formation of a permanent intergovernmental body for assessing biodiversity and ecosystem services, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), in 2012. IPBES has a programme of several assessments over the next decade, including a completed assessment of pollinators and pollination (IPBES 2016), a global assessment of land degradation and restoration currently underway, four regional assessments (Africa, the Americas, Asia-Pacific, and Europe and Central Asia) to be delivered in mid-2018, and a global assessment of biodiversity and ecosystem services (2016–2019).

Inspired by the exposure of one of the authors (R.J.S.) to the IPCC and the MEA, the formal assessment approach was first applied in a purely South African context to the highly polarised topic of elephant management in protected areas (Scholes & Mennell 2009). This transdisciplinary assessment actively engaged topics such as ethics, law and economics within the ‘science’ assessment. The outcome was that contestation around the issue substantially subsided and the ‘Norms and Standards’ for elephant management (South Africa 2008) were published in the Government Gazette. An assessment of another important, complex and contentious South African issue – shale gas development using ‘fracking’ technologies – was completed in November 2016 (Scholes et al. 2016). The shale gas assessment involved over 200 authors and peer-review experts, who assessed 17 wide-ranging topics raised as important by society and by local residents of the Central Karoo. It covered all the material social, economic and biophysical issues associated with shale gas development, ranging from topics relevant at a national scale, such as energy planning and greenhouse gas emissions, to those more relevant at a local scale, such as biodiversity, water, micro-economics, social fabric and ‘sense of place’ values (see full publication at http://seasgd.csir.co.za/scientific-assessment-chapters/).

Paying attention to process

Diligent application of the guidelines summarised in this article does not guarantee a successful assessment, but experience shows that the outcome usually constitutes progress on a formerly intractable problem. Research into the success factors associated with assessments can be summarised into three overarching principles: legitimacy, saliency and credibility (Ash et al. 2010). The three principles are all to a greater or lesser extent advanced by the inclusion of multiple stakeholders, participatory processes and transparency.

Legitimacy means that the assessment was asked for by a body which has a mandate to take action on the topic. This is called an ‘authorising environment’. In its absence, there is a high likelihood that the findings will simply be ignored. For example, the Global Biodiversity Assessment, which was completed in 1995 and involved inputs from more than 1500 scientists, was developed largely independently of the intended audience, that is, national governments. As a result of failing to find a legitimising environment, the report was not welcomed and largely ignored, despite its high technical quality (Raustiala & Victor 1996). Legitimacy also stems from being perceived to have implemented an unbiased process which considers appropriate values, concerns and perspectives of different actors, and corresponds with political and procedural fairness. Thus, the process must include appropriate people and organisations within the project governance structures and follow a pre-agreed set of rules which regulate the process.

Saliency means answering the pertinent questions and only the pertinent questions, fully and in the way the stakeholders pose them and understand them (as opposed to what scientists think stakeholders want to know). As a result, assessments are place- and time-appropriate and specific. The now widely applied process of preparatory ‘scoping’ of the assessment helps in this respect, but by itself does not automatically guarantee saliency, unless it embraces the ‘co-generation of knowledge’ approach of deliberately, actively and iteratively stimulating a dialogue between stakeholders and knowledge holders about what the key questions are and how to frame them. This engagement needs to be sustained throughout the assessment process and culminates in a formal acceptance by the stakeholders that their questions have been addressed (which does not mean that the answers have necessarily satisfied their preferences or value positions).

Credibility means meeting the standards of scientific rigour and technical adequacy. The sources of knowledge in the assessment must be considered trustworthy and independent of ‘interests’. Appointing experts who are widely acknowledged as having appropriate and leading knowledge and experience for the topic, and following a rigorous peer review process – especially with respect to traceability – is essential. The credibility and experience of the assessment leaders is an important factor in attracting, leading and retaining a team of experts and managers capable of running and delivering a high quality of work on large and complex assessments.

Interested parties can become involved in an assessment in several ways. These include participation in the assessment process as an expert author or peer reviewer, participation through being part of the governance structures and participation as a general stakeholder. The most important mechanisms for general stakeholder engagement in an assessment process are presented here.

  • Consulting stakeholders at the outset to determine the need for an assessment and the key issues to be addressed in the assessment, and continuing this dialogue throughout the assessment. Deciding on the questions or topics is usually an iterative (‘co-determined’) process because they have to be both salient and amenable to scientific investigation. Their intent and scope need to be clear and agreed to by both sides. The first formalisation is often in the Terms of Reference provided by the client or funders. The next important milestone is the ‘Zero Order Draft’ including the comments it elicits from stakeholders, and how these are incorporated in its final form. The Zero Order Draft (an expanded, annotated table of contents that provides an overview of the issues to be assessed, usually down to two or three levels of subheadings) is produced through the first author workshop and is often associated with or followed by stakeholder meetings, which consider the Zero Order Draft and suggest revisions – a process of ‘co-generation’.
  • Providing regular information releases to stakeholders through a user-friendly website or newsletter. This includes access to documents describing the assessment methodology, the names of experts undertaking the assessment and the ‘process document’ which describes how the assessment will be managed, how the work teams will be organised and how the process will be governed from inception through to completion.
  • The ‘First Order Draft’, produced about 6 months after the Zero Order Draft is approved, has near-complete (but still tentative) text and sketch versions of the graphics. It goes out for review by two or three independent domain experts, and the revisions resulting from responding to their comments lead to the ‘Second Order Draft’, which is released for comment and review by general stakeholders or any interested person. This allows stakeholders to substantively engage with the draft findings and have the opportunity to comment and provide additional evidence or material which may influence the final draft (Figure 2).

FIGURE 2: Process flow for a complex assessment with three review steps and a documented comment and response process.
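The ‘documented comment and response process’ referred to in Figure 2 is, in practice, a register in which every review comment, the authors’ response and the resulting disposition are recorded and made public. A minimal sketch of such a register (in Python) is given below; the field names and status values are illustrative assumptions rather than a prescribed format.

# Illustrative sketch of a review comment-and-response register of the kind
# implied by Figure 2. Field names and status values are assumptions made for
# illustration, not a prescribed format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewComment:
    comment_id: str            # hypothetical reference, e.g. 'FOD-CH04-017'
    draft: str                 # 'First Order Draft' or 'Second Order Draft'
    reviewer: str              # independent domain expert or general stakeholder
    comment: str               # the comment as submitted
    response: str = ""         # the authors' documented response
    disposition: str = "open"  # later 'accepted', 'partially accepted' or 'rejected'

@dataclass
class CommentRegister:
    comments: List[ReviewComment] = field(default_factory=list)

    def unresolved(self) -> List[ReviewComment]:
        """Comments that have not yet received a documented response."""
        return [c for c in self.comments if c.disposition == "open"]

    def rejections_without_reasons(self) -> List[ReviewComment]:
        """Rejected or partially accepted comments must be explained; flag any that are not."""
        return [c for c in self.comments
                if c.disposition in ("rejected", "partially accepted") and not c.response]

Keeping such a register in the public domain is part of what makes the review step traceable and auditable by the Board and by stakeholders.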

The social processes of discussion and engagement involved in assessments – if well designed and implemented – usually lead to a convergence of opinion rather than potentially damaging polarisation, both at the level of the expert assessment teams and at the level of the stakeholders participating in the process. Extreme positions get isolated by their inability to supply evidence or an unwillingness to follow its logical outcomes (Ash et al. 2010). The co-generation of the questions – through transdisciplinary processes – leads to a high level of buy-in by the authorising bodies. Along with the highly transparent, public nature of the social and participatory processes, this makes disregarding the results of the assessment difficult and politically risky. By separating what is ‘known and agreed’ from what is ‘partly known, but uncertain’ and from what is ‘speculated and contested’, assessments focus the necessary social discussion and contestation where it belongs: on the issues which are not technically clear-cut. These are frequently value-based issues at their core, and it would therefore be inappropriate for them to be decided by a purely technocratic process which does not take cognisance of the plurality of value systems inherent in most societies, and hence in assessment processes (Flyvbjerg 1998).

The credibility of ‘complex assessments’ depends on widespread agreement that they have been implemented in an unbiased manner. The strategies to minimise bias in the process are presented below.

  • Appointing, in a balanced and transparent way, and with the approval of representative governance structures, a diverse multi-author team of experts with a broad range of disciplinary and geographical experience. Each multi-author team addresses a specific topic or issue in the assessment in an integrated manner, working in collaboration with the teams working on other topics in order to effectively recognise transdisciplinary linkages. The objective is not to find totally unbiased authors (philosophically, such a person probably does not exist, although a blatantly biased author is unlikely to pass governance approval), but to include the full range of legitimate expert opinion in the topic team, and trust that debate within the multi-author context balances out individual biases and blind spots, revealing rather than concealing the range of expert opinion and thus giving confidence in the robustness of the findings.
  • Designing a rigorous, transparent (documented and in the public domain) and comprehensive (two- or three-loop) review process (see Figure 2).
  • Ensuring traceability of assertions to primary sources, or clearly flagging them as expert judgement.
  • Clearly stating the confidence which can be placed in top-level findings. This may take the form of statistical confidence (e.g. 95% confidence limits on numerical data) but more usually follows a qualitative approach which evaluates the amount of evidence and the degree of technical agreement, reflected in specific ‘reserved’ words used for this purpose in high-level summary statements (Mastrandrea et al. 2010); a minimal illustrative sketch of such a scale follows this list.
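As a purely illustrative sketch of how such ‘reserved’ confidence terms can be derived from the amount of evidence and the degree of agreement (the mapping below is an assumption made for illustration, not the calibration of Mastrandrea et al. 2010):

# Illustrative sketch: a qualitative confidence scale in which confidence
# increases with both the amount of evidence and the degree of agreement.
# The mapping is an assumption for illustration, not the official IPCC calibration.
EVIDENCE = {"limited": 0, "medium": 1, "robust": 2}
AGREEMENT = {"low": 0, "medium": 1, "high": 2}
CONFIDENCE_TERMS = ["very low", "low", "medium", "high", "very high"]

def confidence(evidence: str, agreement: str) -> str:
    """Map an evidence/agreement pair to a reserved confidence term."""
    score = EVIDENCE[evidence] + AGREEMENT[agreement]  # 0 (weakest) to 4 (strongest)
    return CONFIDENCE_TERMS[score]

print(confidence("robust", "high"))     # 'very high': strong evidence, high agreement
print(confidence("limited", "medium"))  # 'low': little evidence, moderate agreement

In a high-level summary statement, the chosen reserved term is attached to the finding itself (e.g. ‘… (high confidence)’), so that readers can see at a glance how well supported each statement is.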

Who governs the assessment, and how?

A detailed ‘process document’ which establishes the rules at the outset of the process should be drafted and approved by the board and made available to all stakeholders and expert participants. The process document should include the following:

  • need and purpose of the assessment, including a ‘mission statement’
  • geographical and topical scope of the assessment
  • governance structures (Figure 3) and mandates associated with each group
  • assessment methodology, meeting attendance requirements and estimates of effort required
  • composition of the multi-author teams if they have been appointed or at least an indication of the intended coordinating lead authors (Figure 3)
  • approach to the peer and stakeholder review process
  • process timelines and deliverables
  • communications plan including responsibilities and confidentiality requirements until the assessment is approved.

FIGURE 3: A typical governance scheme for a large assessment.

The ‘rules’, as described in the process document, are gently reinforced throughout the process so that there is full transparency and accountability among the participants and those coordinating the assessment. As participation in an assessment is usually voluntary, there is no real sanction other than peer pressure and the personal alignment between the individual participants and the philosophy, mission and values of the assessment.

Funding for assessments might come from a single source, such as an international donor or a national government, or from a coalition of sources. In either case, it is important to insist on independence with respect to the process and findings. The donors or clients are involved in setting the broad objective (the mission and scope), but do not pose the questions or censor the answers. An Assessment Board (other descriptive words can be used if ‘Board’ is inappropriate) representing the stakeholder community is appointed to oversee those processes. The final draft of the assessment is accepted by the Board, not the funders, in order to insulate the authors from the perception that they may have had to tailor their findings to the desires of those who control the purse strings. When the main source of funding for an assessment comes from abroad, it is highly desirable that local funders are also invited to contribute. This will not only add resources but, more importantly, will add legitimacy and provide an opportunity to gain the trust, commitment and engagement of relevant stakeholders (Lucas, Raudsepp-Hearne & Blanco 2010).

The responsibility of the Board is to ensure that the assessment is independent, salient (to the point), thorough and balanced. The Board is ideally drawn approximately equally from the various stakeholder clusters, rather than being ‘representative’ in any quantitative sense. For instance, it may include two to four respected and experienced people from each sector, such as government, donors, NGOs, private sector and the research community. The Board meets about three times during the assessment: to approve the proposed author teams and process document, to approve the Zero Order Draft and to accept the Summary for Policymakers of the final draft (simultaneously signing off on the authenticity and adequacy of the review processes). The key meetings are best held physically, but issues arising between meetings can be dealt with remotely. The Board’s job is not to write or edit the assessment text. If the assessment is found not to be salient, credible and legitimate, the Board can refer it back to the author team until the following criteria are satisfied:

  • Has the assessment process reasonably followed the guidelines set out in the process document?
  • Do the author teams have the necessary expertise and show balance between credible ranges of opinion?
  • Does the assessment (as ‘contracted’ by the Zero Order Draft) cover the material issues and key stakeholder questions?
  • Are the identified expert reviewers independent, qualified and balanced?
  • Have the review comments received from expert and stakeholder reviewers been adequately addressed and have the responses been adequately documented (especially in the case where a review comment is partially or fully rejected)?
  • Is the assessment couched in understandable language, supported if necessary by a glossary, and its presentation professional?

Members of the Board are not appointed as ‘representatives’ of their organisations in a narrow sense. They are expected to reflect the breadth of interests in their various sectors. Membership of the Board disqualifies the members themselves from being assessment authors or expert reviewers, but does not disqualify their organisations from providing authors, expert reviewers or stakeholder review comments. Nor does it in any way preclude those organisations from other avenues of expressing their opinions on the matter, such as speaking to the media or challenging decisions on the topic in court.

Planning and resourcing assessments

Even large, complex assessments are fundamentally a series of smaller, but critically important, social processes and engagements. This relates to the manner in which donors or clients are interacted with; Board members are solicited and consulted; author teams are recruited and managed; peer reviewers are responded to; and stakeholders are engaged.

Throughout the assessment, it is critical to have responsive, adaptable and positive interaction between the topic experts, the assessment Panel and the Secretariat. The Board should not directly engage with authors, but is briefed by the Panel, usually represented by the assessment leader or leaders (Lucas et al. 2010). Experts are asked to be involved in the assessment as domain specialists and may themselves not be familiar with transdisciplinarity, effective personal interaction or project management. It is the responsibility of the assessment leaders to appoint coordinating lead authors who are respected experts in their own right and who also have the capacity to manage a team of domain experts to deliver outputs, on time and within the brief, through a process which may feel strange or unfamiliar. This is done largely by example, positive and regular interaction at author meetings and good-spirited cajolement by the leadership.

As in any large participatory process, conflict may arise. It is important to establish clear avenues for conflict mediation in the process document. The process document must also include a clear workplan with associated outputs and a participant meeting schedule. An assessment is not an academic exercise with flexible deadlines and outputs. Given their importance, scope and complexity, assessments need to be project-managed with detailed attention to process, including timelines and deliverables. The process document must clearly outline the roles and responsibilities of the various participants in the assessment to minimise confusion of mandates (Table 2).

TABLE 2: The roles of various participants in a ‘complex assessment’.

What is in it for participants?

The tendency in complex assessments is for the knowledge-holder participants (authors and expert reviewers) to be unpaid volunteers. Only their out-of-pocket expenses, such as travel and accommodation for author meetings, are met. This has two advantages: it makes the use of large multi-author teams affordable and it insulates the participants from the accusation that they have been bought. This guideline is best applied consistently but sensitively: some key experts may be self-employed and thus may not be able to donate a large fraction of their time to the ‘public good’. Token payments, typically at far below commercial rates, can be made on a case-motivated basis. The Secretariat is paid, and a token contribution to the costs of key human resources, such as the assessment leaders and coordinating lead authors, who give a significant amount of time, is usually made at the discretion of the leaders and the Secretariat.

Given that there is no appreciable financial reward, why do participants agree to get involved in assessments? There are different benefits for different participants. Firstly, most assessments are agreed to be ‘in the public good’, that is, they address a national or international problem which carries significant societal concerns. For stakeholders and governments, assessments can resolve logjams in otherwise intractable issues, reducing the polarisation and making policy decisions possible despite their initial unpopularity among some stakeholders. Organised and street-wise stakeholder groupings may fear co-option by the assessment process, but if they refuse to participate they risk being perceived by decision-makers and the broader public as irrational special interest groups. Often assessments do succeed in finding win–win outcomes on controversial issues, or at least fewer win–lose and lose–lose outcomes.

For the chairs, Secretariat, authors and peer reviewers, assessments offer the opportunity to be exposed to a novel, transdisciplinary way of doing science and influencing important policies which affect society. The chapters of the assessment are typically citable peer-reviewed publications which advance the scientific profile and career of participants far beyond what might be offered by traditional consulting work or the skills development programmes offered in universities. Universities often do not equip their students with the skills they need to be successful assessment practitioners, and their academic supervisors may actively discourage them from engagement with science–policy activities in the early part of their career, arguing that it is a distraction at a time when they need to be doing primary research and building their publication record. Experience has shown that this advice is misguided; the two are not mutually exclusive but mutually reinforcing, within reason. Researchers who are active in assessments, regardless of whether they are young researchers or established researchers, tend to outperform their non-participating peers on many metrics of impact, including publications (Figure 4). The mean H-indices of authors from the three most recent IPCC assessments are around double the mean for environmental scientists as a whole and compare well with that of the top 1% of environmental scientists. Authors from the fifth IPCC assessment (2013) are presumably still in the process of publishing findings related to the assessment; therefore, their mean H-index can be expected to rise over the next few years.

FIGURE 4: Mean H-indices of authors involved in IPCC assessments from 2001 to 2013 and the top 100 environmental scientists in 2016.
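For readers unfamiliar with the metric compared in Figure 4: an author’s H-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of the calculation is given below, using hypothetical citation counts; the group means shown in Figure 4 are simply averages of such per-author values.

# Minimal sketch: computing an author's H-index from per-paper citation counts,
# the metric compared across author groups in Figure 4. Citation counts are hypothetical.
from typing import List

def h_index(citations: List[int]) -> int:
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # at least 'rank' papers have >= 'rank' citations
        else:
            break
    return h

print(h_index([10, 8, 5, 3, 2, 1]))  # prints 3: three papers have at least 3 citations each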

Why do assessment participants score more highly than non-participants? Firstly, there is a degree of self-selection by top researchers: they feel that participation is a duty and a good way to ensure that their science is used. Secondly, for less established researchers, participation in assessments leads to close exposure to new ideas and helps create very effective research networks, spanning countries, disciplines and levels of experience. Thirdly, involvement in assessments tends to stimulate research by revealing important gaps, ideas and opportunities. Fourthly, the assessment process coaches participants in delivering high-quality scientific work to a brief, within the prescribed methodology, and encourages scientists to engage with other domains and stakeholders to ‘co-generate’ knowledge in a manner which makes it usable for society and decision-makers, and thus more salient. Finally, it often leads to co-authorships on high-impact papers. While it is acknowledged that the association between high personal scientific impact and involvement in assessments could have other causal explanations, the evidence presented counters the argument that being involved in assessments has a negative effect on scientific and publication performance.

Conclusions

The process of providing expert inputs to policy decisions has become an important component of many scientific careers and can range from providing an individual opinion on a simple question to a very elaborate, multi-expert process of evidence collation, evaluation and summarisation with high purchase in the policy domain. In order not to impose undue delays and costs on decision-making processes, or to overburden the time of experts to such an extent that they cannot remain experts, it is necessary to judge the degree of process complexity required in each case, depending primarily on the extent of contestation in the scientific domain on the issue and the extent to which it is considered to be a problem of great societal concern with high stakes. Not being cognisant of the receiving environment runs the risk of designing an inadequate process: a rushed, superficial or insufficiently consultative process may be rejected by one or more powerful stakeholders, in the end adding more delay and cost than a more considered approach would have.

Acknowledgements

The organisers of the 43rd Annual Research Symposium on the Management of Biological Invasions, held at Goudini, 17–19 May 2016, invited R.J.S. to give a talk on scientific assessment, paid for his attendance and edited this special issue. The content of this article draws heavily on the ideas and insights of numerous inspiring colleagues in the Intergovernmental Panel on Climate Change (IPCC), the Millennium Ecosystem Assessment (MEA) and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). Some are acknowledged in the references, but in many cases the ideas are unpublished though freely shared. The authors do not wish to claim them as their own.

Shale Gas Development in the Central Karoo: A Scientific Assessment of the Opportunities and Risks was undertaken as phase 2 of the overarching Strategic Environmental Assessment process commissioned and funded by the national Department of Environmental Affairs.

Competing interests

The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

Authors’ contributions

R.J.S. contributed his experience in participating in many international and national assessment processes, most recently co-leading ‘Shale Gas Development in the Central Karoo: A Scientific Assessment of the Opportunities and Risks’ where G.O.S. was the project planner and manager. Data collection and analyses for this article were undertaken by L.S.VdW. All three authors contributed to the writing and editing of the paper.

References

Ash, N., Blanco, H., Brown, C., Garcia, K., Henrichs, T., Lucas, N., et al. (eds.), 2010, Ecosystem and human well-being: A manual for assessment practitioners, Island Press, Washington, DC, USA.

Clusella-Trullas, S. & Garcia, R.A., 2017, ‘Impacts of invasive plants on animal diversity in South Africa: A synthesis’, Bothalia 47(2), a2166. https://doi.org/10.4102/abc.v47i2.2166

Cubasch, U., Wuebbles, D., Chen, D., Facchini, M.C., Frame, D., Mahowald, N., et al., 2013, ‘Introduction’, in T.F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, et al. (eds.), Climate change 2013: The physical science basis, Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, University Press, Cambridge.

Daily, G. (ed.), 1997, Nature’s services: Societal dependence on natural ecosystems, Island Press, Washington, DC.

Enright, W.D., 2000, ‘The effect of terrestrial invasive alien plants on water scarcity in South Africa’, Physics and Chemistry of the Earth: Part B 25(3), 237–242. https://doi.org/10.1016/S1464-1909(00)00010-1

Flyvbjerg, B., 1998, Rationality and power: Democracy in practice, University of Chicago Press, Chicago.

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. & Trow, M., 1994, The new production of knowledge: The dynamics of science and research in contemporary societies, Sage, London.

Hessels, L. & Van Lente, H., 2008, ‘Re-thinking new knowledge production: A literature review and a research agenda’, Research Policy 37, 740–760. https://doi.org/10.1016/j.respol.2008.01.008

IPBES (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services), 2016, Summary for policymakers of the assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services on pollinators, pollination and food production, S.G. Potts, V.L. Imperatriz-Fonseca, H.T. Ngo, J.C. Biesmeijer, T.D. Breeze, L.V. Dicks, et al. (eds.), viewed 10 December 2016, from http://www.ipbes.net/work-programme/pollination

IPCC (Intergovernmental Panel on Climate Change), 1990, Climate change: The IPCC scientific assessment, University Press, Cambridge, viewed 10 December 2016, from http://www.ipcc.ch/publications_and_data/publications_and_data_reports.shtml

IPCC (Intergovernmental Panel on Climate Change), 1995, IPCC second assessment: Climate change 1995, viewed 10 December 2016, from http://www.ipcc.ch/publications_and_data/publications_and_data_reports.shtml

IPCC (Intergovernmental Panel on Climate Change), 2001, Climate change 2001: IPCC third assessment report, University Press, Cambridge, viewed 10 December 2016, from http://www.grida.no/publications/other/ipcc_tar/

IPCC (Intergovernmental Panel on Climate Change), 2007, Climate change 2007: IPCC fourth assessment report, University Press, Cambridge, viewed 10 December 2016, from http://www.ipcc.ch/publications_and_data/publications_and_data_reports.shtml

IPCC (Intergovernmental Panel on Climate Change), 2013, Climate change 2013: Fifth assessment report of the Intergovernmental Panel on Climate Change, University Press, Cambridge, viewed 10 December 2016, from https://www.ipcc.ch/report/ar5/wg1/

Le Maitre, D.C., Van Wilgen, B.W., Gelderblom, C.M., Bailey, C., Chapman, R.A. & Nel, J.A., 2002, ‘Invasive alien trees and water resources in South Africa: Case studies of the costs and benefits of management’, Forest Ecology and Management 160, 143–159. https://doi.org/10.1016/S0378-1127(01)00474-1

Lucas, N., Raudsepp-Hearne, C. & Blanco, H., 2010, ‘Stakeholder participation, governance, communication and outreach’, in N. Ash, H. Blanco, C. Brown, K. Garcia, T. Henrichs, N. Lucas, et al. (eds.), Ecosystem and human well-being: A manual for assessment practitioners, pp. 33–70, Island Press, Washington, DC.

Mastrandrea, M.D., Field, C.B., Stocker, T.F., Edenhofer, O., Ebi, K.L., Frame, D.J., et al., 2010, Guidance note for lead authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties, Intergovernmental Panel on Climate Change (IPCC), viewed 10 June 2016, from http://www.ipcc.ch

MEA (Millennium Ecosystem Assessment), 2005, Ecosystems and human well-being: Current state and trends: Findings of the Condition and Trends Working Group, Island Press, Washington, DC, viewed 7 June 2016, from http://www.unep.org/maweb/en/Global.aspx

Nicolescu, B., 2002, Manifesto of transdisciplinarity, SUNY Press, New York.

Norton, B.G., 2005, Sustainability: A philosophy of adaptive ecosystem management, University of Chicago Press, Chicago, IL.

Oxford English Dictionary, 2016, Assess, viewed 04 July 2016, from http://www.oed.com/view/Entry/11849?redirectedFrom=assess#eid.

Paini, D.R., Sheppard, A.W., Cook, D.C., De Barro, P.J., Worner, S.P. & Thomas, M.B., 2016, ‘Global threat to agriculture from invasive species’, Proceedings of the National Academy of Sciences 113, 7575–7579. https://doi.org/10.1073/pnas.1602205113

Pimentel, D., McNair, S., Janecka, J., Wightman, J., Simmonds, C., O’Connell, C., et al., 2001, ‘Economic and environmental threats of alien plant, animal, and microbe invasions’, Agriculture, Ecosystems and Environment 84, 1–20. https://doi.org/10.1016/S0167-8809(00)00178-X

Raustiala, K. & Victor, D.G., 1996, ‘Biodiversity since Rio: The future of the convention on biological diversity’, Environment: Science and Policy for Sustainable Development 38(4), 16–45.

Reid, W.V., Berkes, F., Wilbanks, T.J. & Capistrano, D. (eds.), 2006, Bridging scales and knowledge systems: Concepts and applications in ecosystem assessment, Island Press, Washington, DC.

Richardson, D.M. & Van Wilgen, B.W., 2004, ‘Invasive alien plants in South Africa: How well do we understand the ecological impacts?’, South African Journal of Science 100, 45–52.

Scholarometer, 2016a, Top 100 authors in environmental sciences by h-index, viewed 07 July 2016, from http://scholarometer.indiana.edu/explore.html

Scholarometer, 2016b, Search statistics for environmental sciences, viewed 11 July 2016, from http://scholarometer.indiana.edu/explore.html

Scholes, R.J. & Biggs, R. (eds.), 2004, Ecosystem services in southern Africa: A regional assessment, Council for Scientific and Industrial Research, Pretoria.

Scholes, R.J. & Mennell, K.G. (eds.), 2009, Elephant management: A scientific assessment for South Africa, Wits University Press, Johannesburg.

Scholes, R.J., Lochner, P., Schreiner, G., Snyman-Van der Walt, L. & De Jager, M. (eds.), 2016, Shale gas development in the Central Karoo: A scientific assessment of the opportunities and risks, CSIR/IU/021MH/EXP/2016/003/A, ISBN 978-0-7988-5631-7, CSIR, Pretoria.

Shackleton, C.M., McGarry, D., Fourie, S., Gambiza, J., Shackleton, S.E. & Fabricius, C., 2007, ‘Assessing the effects of invasive alien species on rural livelihoods: Case examples and a framework from South Africa’, Human Ecology 35, 113–127. https://doi.org/10.1007/s10745-006-9095-0

South Africa, 2008, National environmental management: Biodiversity Act (10 of 2004): National norms and standards for the management of elephants in South Africa, (Notice 251), Government Gazette 30833, 3, 28 Feb.

South Africa, 2014, National Environmental Management: Biodiversity Act (10 of 2004): Alien and invasive species regulations. Government Gazette 37885, 598, 01 Aug.

Stokes, D.E., 1997, Pasteur’s quadrant: Basic science and technological innovation, Brookings, Washington DC.

UNEP (United Nations Environmental Programme), 1987, The Montreal protocol on substances that deplete the ozone layer, viewed 04 July 2016, from http://ozone.unep.org/en/handbook-montreal-protocol-substances-deplete-ozone-layer/5

Van Wilgen, B.W., Little, P.R., Chapman, R.A., Görgens, A.H.M., Willems, T. & Marais, C., 1997, ‘The sustainable development of water resources: History, financial costs, and benefits of alien plant control programmes’, South African Journal of Science 93, 404–411.

Wilson, J.R.U., Gaertner, M., Richardson, D.M. & van Wilgen, B.W., 2017, ‘Contributions to the national status report on biological invasions in South Africa’, Bothalia 47(2), a2207. https://doi.org/10.4102/abc.v47i2.2207

WMO (World Meteorological Organisation), 1985, Atmospheric ozone 1985: Assessment of our understanding of the processes controlling its present distribution and change. World Meteorological Organization Global Ozone Research and Monitoring Project. Report No. 16, viewed 07 July 2016, from http://www.esrl.noaa.gov/csd/assessments/ozone/

Zengeya, T., Ivey, P., Woodford, D.J., Weyl, O., Novoa, A., Shackleton, R., et al., 2017, ‘Managing conflict-generating invasive species in South Africa: Challenges and trade-offs’, Bothalia 47(2), a2160. https://doi.org/10.4102/abc.v47i2.2160

Ziman, J., 2000, Real science: What it is, and what it means, University Press, Cambridge.


