Measuring science, seeing virtue

 

A blog by Lilia Moreles-Abonce, Georgia Christie, and Katinka Hunter-Morris

 

 

A place like Imperial College – a leading global institution – has to be successful. We therefore need to know and understand, as a community, what we mean by success. What do we mean by ‘good science’ or ‘good scientists’, and are there conflicts between institutional and personal successes? What are the different criteria we should use to measure this? Can we even measure these criteria, supposing we can decide on them? Indeed, for a high-impact and competitive institution like Imperial, how best can we include the human touch within our metrics?

 

These questions and more were explored in the Friday Forum, ‘Measuring Science, Seeing Virtue’ held at Imperial College in December 2024. Around 60 people attended, from a range of academic disciplines and faculties. First there was an engaging discussion between three panellists, which was followed by a very lively audience Q&A.

 

Our panellists were:

 

  1. Mary Ryan (Vice-Provost for Research and Enterprise, and Armourers and Brasiers’ Chair for Materials Science, at Imperial College). Well experienced through her current role and her previous career, Professor Ryan stressed at once the need to have different methodologies for measuring different sorts of success. She also stressed, as a senior academic, her view that the act of measuring success, and its problems, is a community issue that matters deeply.

 

  2. Stephen Curry (Emeritus Professor of Structural Biology). He served as one of seven College Consuls, and as Imperial’s first Assistant Provost for Equality, Diversity, and Inclusion. Professor Curry emphasised that these issues are complex as well as important. Thus, events such as this Friday Forum, where staff can debate together the collective criteria for success, are to be welcomed.

 

  3. André Spicer (Executive Dean of the Bayes Business School). Professor Spicer reminded us that an important aspect of measuring ‘success’ is simply leadership: high performance occurs when there is a high level of trust within the university.

 

Once the panellists had been introduced, each gave an opening statement of around eight minutes. Professor Ryan elaborated on different dimensions to measurements of success in science. Of course, as Professor Ryan said, university rankings are a factor here, because they have such a strong influence on how a university is perceived. There are also links to the necessity of Imperial making sure it delivers what it promises. This suggests a commitment to mission and to objectives: naturally the extent of their achievement must be evaluated, which calls for measurement. But probing further, she noted that there seems to be a lot of measuring of the ‘whats’ but less of the ‘whys’. We must always remind ourselves that we tend to measure the obvious things, and so are in danger of missing the more elusive aspects of academic life.

 

As a final note, Professor Ryan emphasised that we shouldn’t have a single, or simple, sense of the measurement of the ‘life scientific’, as there are so many criteria and so many methodologies for looking at what scientists are doing. She therefore ended her talk by inviting the attendees to ask themselves constantly: “How do we make sure that as a community we focus on ‘what really matters’, and how do we make sure we always seek the right measurements for supporting ‘what really matters’?”

 

Now it was Professor Curry’s turn. Imperial, he reminded us, is a global top-ten university with constantly growing impact. With this success, he said, goes a responsibility to take a sufficiently complex view of rankings. He suggested that a fixation on output – on results – must always be complemented by an appreciation of the quality of the scientific process. Professor Curry reminded the audience that success, as a concept, is in truth somewhat elusive: there is the risk that the more we make success the target, the more we will miss the real thing. Success, he said, is multidimensional, and he talked of ‘weaving into the system’ good training, good leadership, and good mentoring. Get this wrong, and the human cost is high.

 

Please keep your values to the forefront, urged Professor Curry. After all, most people come into science with a desire to make positive changes. Yet trying to hold onto one’s values throughout a scientific career is difficult. Ours is a competitive environment, he said, though also one that we always hope will be supportive. Professor Curry ended his talk by urging the audience to see the importance of talking to one another about what we might consider worthwhile, and worth cherishing, in university life.

 

Professor André Spicer began his talk by describing the case of a seemingly prolific Spanish scientist, who was found to follow questionable practices in order to generate a remarkably high research output. With this example, Professor Spicer touched on the recent research showing that while overall research effort is increasing, the amount of significant ‘discovery’ does not have the same upwards trajectory. Such a proliferation of research may ‘pay off’ for individuals, said Professor Spicer, because of the influence of simple metrics, but overall the problem is that people come to confuse the impressive career metric with the actual goal.

 

This is the dynamic we must be cautious about. The wrong kinds of incentive – perhaps those that produce a simple ranking of scientists – lead researchers to misdirect their focus, and thus to succumb to cheating, the gaming of systems, and the neglect of long-term thinking. More optimistically, Professor Spicer offered us six steps that can be followed to keep us from such destructive habits:

 

  1. Get people involved in the development of measurement strategies.
  2. Focus on narratives, as these tend to create long term measures, giving people a sense that variables – criteria of success – are attainable, given perseverance and diligence.
  3. Loosen the relationship between incentives and measurement.
  4. Use a wider range of measures.
  5. Add some strategic uncertainty to the metrics, to help people avoid an over-fixation on simple criteria, and to discourage ‘gaming’.
  6. Focus on careful metric design, so that the techniques and objectives of measurement are simple, fair, available, immediate, and reliable.

 

Nicely set up by these fascinating talks, we moved on to the audience Q&A.

Many interesting points were brought up: we heard more about Key Performance Indicators (KPIs), interdisciplinarity, and the importance of relationships. The first question asked how the social sciences, and particularly a narrative approach to research, could ever fit into evaluation metrics at Imperial. Professor Curry noted that this question is indeed important, as the amount of social science research at Imperial is increasing. Indeed, Professor Curry pointed out, it might be that the social science capability we have at Imperial will become a resource to draw on when we evaluate impact and create metrics. On the same theme, Professor Ryan pointed out that if we only look at research’s impact after it is achieved, we automatically limit our vision, and fail to ‘see’ the quality, or the virtue, in science.

 

Another member of the audience defended KPIs, claiming that they can drive ambitions and assist strategy. Professor Spicer’s response was that balance is key in this: we need to weave between short term goals and long-term strategy. Professor Ryan agreed, and drew our attention to the difference between KPIs and outcomes.

 

Next, an audience member brought up the importance of relationships in the world of research. How do we maintain these relationships, and how do we measure them? All three panellists agreed that this is indeed an important question, touching on why we enjoy science in the first place, and all agreed that it is a difficult one to answer. Professor Ryan pointed out that relationships are not time-limited, and so differ from the way we organise science projects. This clearly is a problem when we try to recognise the importance of relationships in science. For example, a relationship might mature into some creative understanding of a technique or theory, and promise great things, while meantime the project time frame has long since finished and the funding has dried up. Professor Curry said that the work of relationship-building within research is often hidden and underappreciated, and varies very much across Imperial. Professor Spicer also reminded us that in science we tend to look at the impact of relationships transactionally, as matters of exchange, but instead we should look at them in a richer, more human, more dynamic way.

 

The next question asked how a scientist’s doubts about metrics might affect their strategy for achieving promotion, especially when different departments have very particular expectations. Professor Ryan said that of course it makes sense that academics want to know what to do if they are to be promoted. Criteria, she said, are broadening all the time. She for instance mentioned that the way we think about promotion is shifting from a focus on research to a focus on teaching. Professor Spicer added that we shouldn’t be making promotions on the basis of numerical metrics, whether for research or teaching, but should instead rely on a constantly re-elaborated set of criteria.

 

 

 

This Friday Forum itself was a success. The meeting facilitated a fascinating discussion, exploring the topic of how a leading STEMB institution, like Imperial, should measure itself. We saw for ourselves, as we talked over lunch and then gathered for the discussion, the importance of creating spaces for in-person gatherings. Within the short hour that is a Friday Forum, diverse voices from across academic disciplines and faculties could make their points and in turn draw comment. Imperial clearly has both the incentive and the internal gifts to allow it to take a nuanced and sensitive approach to defining, seeing, and pursuing ‘success’ in science.