By Paul Jump
As 2014 race begins, reports Paul Jump, maternity provision is the only big change from draft document.
The dreaded assessment season was formally kicked off last week by the publication of the final version of the panel criteria and working methods for the 2014 research excellence framework. But any research managers who elected to hold off from gathering outputs and impact case studies until the rules of the game were finalised will not feel particularly vindicated: major revisions of the draft document were few and far between.

The most high-profile change, regarding the treatment of researchers who have had a baby during the assessment period, was announced in October following “overwhelming” opposition to a proposal in the draft to limit exemptions from submitting the standard four outputs to researchers whose maternity leave exceeded 14 months.
Mothers will now be able to submit one fewer output for each period of maternity leave taken during the assessment period, which runs from 1 January 2008 until 31 December 2013 for outputs and until 31 July 2013 for the impact and environment elements of the exercise. Researchers who take more than four months of paternity or adoption leave will also be permitted to submit one fewer output without penalty.

The REF team, which administers the exercise on behalf of the four UK funding councils, has also responded to calls from many of the 400 respondents to its consultation on the draft document for greater consistency across the four main panels and for a reduction in the amount of data institutions are required to submit. Among the resulting changes are the standardisation of terminology used in the separate panels’ guidelines where the same meaning is intended, and the harmonisation of panels’ approaches to double-weighted and co-authored submissions.

The document confirms that all subpanels will permit institutions to submit a “reserve” output, which the subpanels will assess if they decline institutions’ requests to give a double weighting to outputs where “the combined scale of academic investment in the research activity and the intellectual scope of the research output is considerably greater than the disciplinary norm”.
For its part, Main Panel B (physical sciences) notes that it would expect to receive such requests only “very exceptionally”.

Meanwhile, the amount of information required about a submitted researcher’s contribution to a co-authored output has been reduced. Evidence that the contribution was “substantial” will now be required only for physics papers with more than 10 authors and for biomedical papers with more than six authors where the submitted individual is not the lead or corresponding author.

The only other case in which evidence of individual contribution will be required is where the same output is submitted against more than one person from the same institution to the same subpanel. Such co-submission will be limited to two people, and all but Main Panel D (humanities) explicitly discourage it on the grounds that it is not the best way for them to gain what Main Panels A (bioscience) and B call “the fullest and most favourable impression of a submitted unit’s research”.
By contrast, outputs may be submitted against any number of co-authors to different subpanels.

The finalised panel criteria also confirm that any gaps in expertise on subpanels will be plugged by additional assessors put forward by nominating bodies. These will be either “users” or research practitioners, depending on whether they are being recruited to assess impact or outputs, respectively. They will play a “full and equal” role with other panellists and may be appointed to more than one subpanel to help assess interdisciplinary work.
Filling the expertise gap
This provision addresses concerns about perceived gaps in expertise raised by some academics when the membership of the 36 subpanels – reduced from 67 in the 2008 RAE – was announced last February. “Clear gaps” will be filled this year, and further assessors will be appointed in 2013 if a planned survey of institutions’ submission intentions flags up any further gaps or “potential workload issues for existing members”.

The finalised criteria also make clear that outputs that were pre-published before the beginning of the census period will be eligible for submission provided they were not submitted to the 2008 RAE.

Regarding the impact element of the REF, Main Panel C (social sciences) has fallen into line with the rest by providing a list of examples.
It has also responded to calls for greater explication of the criteria of “reach” and “significance” by which impact will be assessed. It makes clear that reach will not be assessed in terms of absolute numbers of beneficiaries but according to how many of the potential beneficiaries have been affected. “It is, for example, recognised that a policy issue affecting one region of the UK uniquely has that region as the potential domain for the impact, and that defines the boundaries of the possible reach achievable,” it says. It adds that reach and significance will be assessed “on a holistic basis”, rather than separately.

The revised document also notes that impact case studies should include evidence that the underlying research meets the REF team’s stipulation that it should be at least 2* (internationally recognised) in quality. This may include associated peer-reviewed publications, end-of-grant reports, citation counts or details of competitive funding attracted or prizes won.
Kid gloves for top-secret work
The panels also give more detail on their proposed procedures for dealing with case studies where the impact – such as the development of new military hardware – may be made known only to people with national security clearance. Institutions must seek permission from the REF team by the end of this year to submit such case studies, describing the broad nature of the impact and the level of clearance required by potential assessors. Permission to submit will be granted where an appropriate assessor can be found. But only the outline information will be made available to the rest of the subpanel, and no details about these case studies will be published.

The panels emphasise that public engagement will score highly for impact only where it is based on research carried out in the submitting unit and goes beyond what Main Panel A calls “business as usual” by, for instance, actively involving audiences, informing the focus of research or creating “legacy resources”.
Main Panel C emphasises that providing audience numbers will not be enough and suggests listing any follow-up activity or evidence of sales or downloads of related materials.
Evidence of the “significance” of the outreach could include audience feedback, external reviews or “evidence of sustainability through, for example, a sustained or ongoing engagement with a group, a significant increase in participation in events or programmes or use of resources”.
Institutions that submit case studies consisting of breathy superlatives about the width of audiences’ smiles and the volume of their applause cannot say they have not been warned.
Source: Times Higher Education http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=418933&c=1