
What counts for academic productivity in research universities?

Publication in high-status refereed journals has become a major criterion of academic success in the competitive environment of global higher education. Appearing in internationally circulated journals published in English is especially prestigious. Universities are engaged in a global arms race of publication, and academics are the shock troops of the struggle.

At stake are placement in the global university rankings, the allocation of budgets from governments, national prestige, the ability to attract the best students and professors, and a preferred place in the pecking order of academe.

It is also useful to keep in mind that the publications and rankings games are limited to a very small part of the academic system in any country.

Most universities are largely teaching institutions and have a limited research mission or profile, if any. Only a thousand or so out of the world’s 18,000 universities appear anywhere in the international rankings.

In fact, there needs to be recognition that most universities are teaching institutions and their emphasis should be on teaching and learning – not on improving their research and publication profile.

For most of any academic system, productivity should be measured in terms of effective teaching, a careful understanding of what students learn, and whether students who enter higher education complete their studies.

Thus, this discussion is limited to a small but important minority of academic institutions.

Measuring research productivity

For research-intensive universities and the academics working in them, the measurement of academic productivity is neither straightforward nor easy.

The quality of teaching, a key academic function, is seldom measured adequately – in part because assessing teaching effectiveness is not easy and there are no widely accepted parameters for doing so. The standard metric of asking students for their opinions in each course is widely recognised as inadequate.

Further, current debates emphasise learning as much as teaching – what ‘value added’ a student has gained as a result of his or her studies. There is little agreement about how to measure either teaching or learning.

Research universities focus mainly on research accomplishment: this is their core mission and what is key to the rankings and the achievement of high global status.

Research productivity is easier to measure than other kinds of academic work – teaching, as noted, is difficult to assess, and community engagement and such important functions as university-industry linkages are also difficult to define and quantify. Thus, research is not only the gold standard, but almost the only semi-reliable variable.

But even measuring research productivity is problematic. The global rankings count publications in journals that are indexed in the main global indices – such as the Science Citation Index, Web of Science or Scopus, or their equivalents for other disciplines. These indices list only a small number of journals and tend to favour publications in English – the global scientific language.

The rankings and other national evaluations also count research grants and other awards. Again, this may be appropriate for the hard sciences, but not necessarily for other disciplines. The rankings also do not take into account the vast differences among countries and academic systems in the amounts of funding available.

Neither the indices nor most universities recognise a range of other measures of productivity, or the significant changes in knowledge distribution that have taken place in recent years.

The straitjacket of the indices

The Science Citation Index, or SCI, and similar indices measure only one kind of academic productivity – that which is most common in the natural and biomedical sciences. In these fields, scientific work is in general reported in peer-reviewed journal articles that are later cited by other scientists.

For example, an up-and-coming African research university, which annually rates each professor according to productivity measures, counts a journal article in a ‘top’ international journal as double the ‘points’ granted for a successful book. A professor is expected to ‘produce’ a specified number of points annually and refereed journal articles yield the most points.

Many universities and academic systems provide payments to faculty members in recognition of research productivity. Often, the maximum payments are for articles published in peer-reviewed SCI-approved journals. Such payments may be the equivalent of a month’s salary or more – this is the case in some top Chinese universities. In some cases, these payments are added to the ‘base’ salary.

A well-known Russian university provides bonuses that can more than double the rather low base salaries – the bonuses for Russian language publications are less than half of those provided for publication in internationally recognised journals. Books or book chapters are not eligible for these bonuses.

Other disciplines may report research results in different ways. In the humanities and some social sciences, for example, books are important tools for imparting knowledge and reporting research. But the impact and intellectual influence of books are difficult to calculate, and so they are typically not counted at all.

Excluding books disadvantages those academic fields in which books remain a central element of knowledge communication – and the scholars who write or edit them. The fact is that books remain an important means of communicating knowledge.

Anarchy and revolution in communication

Mass higher education and information technology have both contributed to anarchy and revolution in the ways that academic knowledge is communicated.

Less than a half-century ago, the bulk of the world’s research findings and academic knowledge was communicated by a relatively small number of refereed journals and academic and commercial publishers that were widely recognised by the academic community. Most knowledge was produced and consumed in a small number of countries and universities in Europe and North America.

Although the traditional knowledge centres remain dominant, many more universities and researchers in different parts of the world are now producing quality science and scholarship – academics in China, Brazil, Russia and other countries are engaged in the global knowledge network as producers as well as consumers.

Top journals are increasingly selective and remain dominated by the main academic centres – providing limited access to others. Further, many are controlled by large multinational publishers that charge high prices for access.

Taking advantage of the internet, new ‘open access’ journals have emerged – although their quality and rigour are questionable. ‘Fake’ journals that will ‘publish’ anything, if a fee is paid, have proliferated – as have a growing number of vanity publishers that will publish books for a fee.

In short, there is much confusion and considerable anarchy in today’s knowledge communication business.

Dilemmas of research funding

Academic institutions and systems – and, of course, many of the rankings – take research funding into account when assessing academic productivity in research universities.

Obtaining funding is a valid measure of accomplishment and in some scientific fields almost a necessity for conducting research. Yet in many, perhaps most, disciplines funding is difficult to obtain and the resources available are generally quite limited. In such fields, including the humanities and most social sciences, good research can be accomplished with little external funding.

Further, funding even in the sciences and biomedical areas tends to be more available to scientists in the top-ranking universities in countries with well-developed research infrastructures. Thus, when using funding as a metric for assessing academic productivity, considerable care and sophistication are required.

How to assess academic research productivity?

The problems are clear – although usually ignored by those eager to ‘measure’ and ‘reward’ research productivity – but the solutions are not. One size certainly does not fit all when it comes to assessing research productivity in particular and academic work in general.

Measures necessarily vary by discipline. Some things are easier to measure than others – articles published in mainstream scientific journals are easier to evaluate than books or various kinds of online and ‘open access’ publications.

It is probably too much to ask that care, discretion and sophistication be used when making judgements that often affect the salaries and academic futures of professors in an age of hyperaccountability.

* Philip G Altbach is research professor and director of the Center for International Higher Education at Boston College in the United States.