Thursday, 25 July 2013

A University without Faculty: The Demise of the University of Phoenix and the Rise of the MOOCs

The University of Phoenix is now the largest university in America, but this may soon change. This mostly online institution is facing an accreditation sanction, which could force it to lose its Pell Grants, student loans, and other federal subsidies. Not only is the stock price taking a major beating, but massive layoffs are underway.

Although we should not take pleasure in other people’s job losses, it is important to focus on what happens when higher education is taken over by a soulless corporation. While the founder of the university has become a billionaire and has just received a $5 million retirement package, the school is shedding many of its on-the-ground employees. Like many other for-profit schools, the University of Phoenix receives most of its funding from public monies, which it uses to enrich administrators and shareholders and to hire an army of marketers and recruiters who turn mostly under-represented minority students into unemployed debt slaves, all while keeping its entire faculty off the tenure track. In many ways, this school represents the extreme logic of the online education movement: eliminate tenure for the faculty, develop questionable distance education, cater to private corporations, and leave students with high debt levels and bogus degrees (in fact, very few students ever earn their degrees, and very few get their promised jobs).

While online course providers like Coursera and Udacity appear to represent a much more progressive version of this high-tech education promotion, let us look at some of the statements that are coming out of the mouths of these not-for-profit, profit-seeking marketers. Here is Sebastian Thrun, founder of Udacity, from the UCLA forum (these quotes come from the rush transcript on Remaking the University): “Students rarely learn listening . . . or they never learn by listening. The challenge for us is to take this new medium and really bring it to a mode where students do something and learn by doing. And if you look at the broad spectrum of online technology with what happens. It doesn’t really take long time to point to video games. And most of us look down on video games. We’ve also played them. I know there are people in this room who play angry birds. Some people do. Some people don’t admit it. Angry birds is an wonderful learning environment because you get drawn in, you solve the physics problems but the big problem is that it stops at angry birds . . . if the angry birds was good enough to get into the masters students in physics. It would be an amazing experience and you could do this at scale.” The point I want to stress here is the claim that students never learn from listening. Following this logic, most of current education is simply useless, and we should just have students take out their smart phones and play Angry Birds all day.

During Thrun’s presentation at UCLA, this downgrading of traditional learning environments was connected to a downsizing of the faculty: “As we know that higher education is moving at a slower pace compared to the industry moves. We have been funded by a whole bunch of corporations that make the classes with us and there’s a number of classes launching soon on topics to be not covered in academia. If you look at the way the technology turns over, it will be 5-10 years in computer science [and] if you look at the way colleges turn over, it’s much more difficult because [with] tenure they are gonna be with us for 30 years so the national turnover rate for colleges is about 30 years. Industries it’s like 5-10 years. So there’s a disconnect between how the world changes and how colleges are able to keep up. Therefore in computer science it would be hard to find courses that teach technologies that are useful today such as IOS and all the wonderful things that they do. So the industries jumped in and funded us to build these classes.” According to this logic, since tenure requires a thirty-year commitment to the faculty, and industry and technology change at a much faster rate, we need to get rid of the secure faculty and replace them with student mentors and the latest technology.

Thrun’s argument fails to recognize that faculty also develop and change, and most faculty, including his own wife, now teach without tenure. His point of view also pushes the idea that technological change is always for the better, and even if it is not good, there is no way to resist it. As I have previously argued, we need to compare online courses to our best courses and not our worst, and we have to defend and define quality education and push for more funds to be spent on small, interactive classes. However, Thrun and other MOOC celebrators appear to have a disdain for their own teaching: “But in the existing classes, the level of services are often not that great. . . .I talked to numerous instructors and you divide the time the communal time and the personal time you give back to the students in terms of advising and grading . . . you can be lucky as a student for 3 credits class to get 3 hours of personal time. Many people laugh and many say I spend 10 min/student per class and the rest I give to my TAs. Charging $1,000-$4,000 for that to me is gonna be a question going forward.” Although I have often questioned what students are actually paying for in higher education, what Thrun is really questioning is the validity and value of large, impersonal lecture classes, and on this point, we are in agreement; still, the question that remains is whether large online courses can really provide the quality education they advertise.

At the last Regents meeting, many of these themes were continued as three computer science professors attempted to convince the UC system that online courses would make higher education “better, faster, and cheaper.” In her presentation for Coursera, Daphne Koller insisted that since students now have a very short attention span, the classic lecture has to be broken up into a series of short videos followed by an interactive question and answer system. She argued that this method paradoxically makes mass education personalized as it pushes students to constantly learn and be tested on material before they advance.

Like the other online course providers, in order to differentiate her “product” from the “traditional” model of education, Koller had to constantly put down the current way we educate students. Thus, she derided the “sage on the stage” and the inability of most students to ask questions in their large lecture classes. She also bemoaned the fact that no one wants to read students’ tests with identical questions and answers, and so the whole grading process can be given to computers and fellow students. Once again, this argument not only degrades the value and expertise of faculty, but it also treats students as if they need to be reimagined as programmable machines and free laborers. Yes, let’s have the students grade each other’s papers, and, while they pay for their education, let us train them to work for free.

Another alarming aspect of the rhetoric of these providers is their constant reference to experimenting on students as they attempt to increase access to higher education. The idea presented at the Regents meeting is that since so many under-represented students cannot find places in the UC system, these students from underfunded high schools should be given an online alternative. Some have called this the Digital Jim Crow because wealthy students will still have access to traditional higher education, while the nonwealthy, under-represented minority students will be sent to an inferior online system. Of course this new form of educational segregation is being pushed under the progressive banner of expanding access.

Wednesday, 24 July 2013

Times Higher Education Under 50s Rankings

Times Higher Education has now published its ranking of universities less than fifty years old.
The top five are:

1. Pohang University of Science and Technology
2. EPF Lausanne
3. Korea Advanced Institute of Science and Technology
4. Hong Kong University of Science and Technology
5. University of California, Irvine

They are quite different from the QS rankings of young universities. I hope to provide a detailed comparison before long.

Bad Mood Rising

In 2006 I tried to get an article published in the Education section of the Guardian, that fearless advocate of radical causes and scourge of the establishment, outlining the many flaws and errors in the Times Higher Education Supplement -- Quacquarelli Symonds (as they were then) World University Rankings, especially its "peer review". Unfortunately, I was told that they would be wary of publishing an attack on a direct rival. That was how University Ranking Watch got started.


Since then QS and Times Higher Education have had an unpleasant divorce, with the latter now teaming up with Thomson Reuters. New rankings have appeared, some of them only to disappear almost as quickly -- there was one from Wuhan and another from Australia, but both seem to have vanished. The established rankings are spinning off subsidiary rankings at a bewildering rate.

As the higher education bubble collapses in the West, everything is getting more competitive, including rankings, and everybody -- except ARWU -- seems to be getting rather bad-tempered.

Rankers and academic writers are no longer wary about "taking a pop" at each other. Recently, there has been an acrimonious exchange between Ben Sowter of QS and Simon Marginson of Melbourne University. This has gone so far as to include the claim that QS has used the threat of legal action to try to silence critics.

"[Ben] Sowter [of QS] does not mention that his company has twice threatened publications with legal action when publishing my bona fide criticisms of QS. One was The Australian: in that case QS prevented my criticisms from being aired. The other case was University World News, which refused to pull my remarks from its website when threatened by QS with legal action.

If Sowter and QS would address the points of criticism of their ranking and their infamous star system (best described as 'rent a reputation'), rather than attacking their critics, we might all be able to progress towards better rankings. That is my sole goal in this matter. As long as the QS ranking remains deficient in terms of social science, I will continue to criticise it, and I expect others will also continue to do so."

Meanwhile the Leiter Reports has a letter from "a reader in the UK".

THES DID drop QS for methodological reasons. The best explanation is here: http://www.insidehighered.com/views/2010/03/15/baty
But there may have been more to it? Clearly QS's business practices leave an awful lot to be desired. See: http://www.computerweekly.com/news/1280094547/Quacquarelli-Symonds-pays-80000-for-using-unlicensed-software
Also I understand that the "S" from QS -- Matt Symonds -- walked out on the company due to exasperation with the business practices. He has been airbrushed from QS history, but can be found at: https://twitter.com/SymondsGSB
And as for the reputation survey, there was also this case of blatant manipulation: http://www.insidehighered.com/news/2013/04/08/irish-university-tries-recruit-voters-improve-its-international-ranking
And of course there's the high-pressure sales: http://www.theinternationalstudentrecruiter.com/how-to-become-a-top-500-university/
And the highly lucrative "consultancy" to help universities rise up the rankings: http://www.iu.qs.com/projects-and-services/consulting/
There are "opportunities" for branding -- a snip at just $80,000 -- with QS Showcase: http://qsshowcase.com/main/branding-opportunities/
Or what about some relaxing massage, or a tennis tournament and networking with the staff who compile the rankings: http://www.qsworldclass.com/6thqsworldclass/
Perhaps most disturbing of all is the selling of dubious Star ratings: http://www.nytimes.com/2012/12/31/world/europe/31iht-educlede31.html?pagewanted=all&_r=0
Keep up the good work. It's an excellent blog.

All of this is true although I cannot get very excited about using pirated software and the bit about relaxing massage is rather petty -- I assume it is something to do with having a conference in Thailand. Incidentally, I don't think anyone from THE sent this since the reader refers to THES (The S for Supplement was removed in 2008).

This is all a long way from the days when journalists refused to take pops at their rivals, even when they knew the rankings were a bit rum.

Competition and controversy in global rankings

Higher education is becoming more competitive by the day. Universities are scrambling for scarce research funds and public support. They are trying to recruit increasingly suspicious and cynical students. The spectre of online education is haunting all but the most confident institutions.


Rankings are also increasingly competitive. Universities need validation that will attract students and big-name researchers and justify appeals for public largesse. Students need guidance about where to take their loans and scholarships. Government agencies have to figure out where public funds are going.

It is not just that the overall rankings are competing with one another, but also that a range of subsidiary products have been let loose. Times Higher Education (THE) and QS have released Young University Rankings within days of each other. Both have published Asian rankings. THE has published reputation rankings and QS has published Latin American rankings. QS’s subject rankings have been enormously popular because they provide something for almost everybody.

There are few countries without a university somewhere that can claim to be in the top 200 for something, even though these rankings sometimes manage to find quality in places lacking even departments in the relevant fields.

QS’s academic survey

Increasing competition can also be seen in the growing vehemence of the criticism directed against and between rankings, although there is one ranking organisation that so far seems exempt from criticism. The QS academic survey has recently come under fire from well-known academics although it has been scrutinised by University Ranking Watch and other blogs since 2006.

It has been reported by Inside Higher Ed that QS had been soliciting opinions for its academic survey from a US money-for-surveys company that also sought consumer opinion about frozen foods and toilet paper.

The same news story revealed that University College Cork had been trying to find outside faculty to nominate the college in this year’s academic survey.

QS has been strongly criticised by Professor Simon Marginson of the University of Melbourne, who assigns it to a unique category among national and international ranking systems, saying, “I do think social science-wise it’s so weak that you can’t take the results seriously”.

This in turn was followed by a heated exchange between Ben Sowter of QS and Marginson.

Although it is hard to disagree with Marginson’s characterisation of the QS rankings, it is strange he should consider their shortcomings to be unique.

U-Multirank and the Lords

Another sign of intensifying competition is the response to proposals for U-Multirank. This is basically a proposal, sponsored by the European Union, not for a league table in which an overall winner is declared but for a series of measures that would assess a much broader range of features, including student satisfaction and regional involvement, than rankings have offered so far.

There are obviously problems with this, especially with the reliance on data generated by universities themselves, but the disapproval of the British educational establishment has been surprising and perhaps just a little self-serving and hypocritical.

In 2011, the European Union Committee of the House of Lords took evidence from a variety of groups about various aspects of European higher education, including U-Multirank. Among the witnesses was the Russell Group of elite research-intensive universities, formed after many polytechnics were upgraded to universities in 1992.

The idea was to make sure that research funding remained in the hands of those who deserved it. The group, named after the four-star Russell Hotel in a “prestigious location in London” where it first met, is not an inexpensive club: recently the Universities of Exeter, Durham and York and Queen Mary College paid £500,000 apiece to join.

The Lords also took evidence from the British Council, the Higher Education Funding Council for England, the UK and Scottish governments, the National Union of Students and Times Higher Education.

The committee’s report was generally negative about U-Multirank, stating that the Russell Group had said "ranking universities is fraught with difficulties and we have many concerns about the accuracy of any ranking”.

“It is very difficult to capture fully in numerical terms the performance of universities and their contribution to knowledge, to the world economy and to society,” the report said. “Making meaningful comparisons of universities both within, and across, national borders is a tough and complex challenge, not least because of issues relating to the robustness and comparability of data.”

Other witnesses claimed there was a lack of clarity about the proposal’s ultimate objectives, that the ranking market was too crowded, that it would confuse applicants and be “incapable of responding to rapidly changing circumstances in institutional profiles”, that it would “not allow different strengths across diverse institutions to be recognised and utilised” and that money was better spent on other things.

The committee also observed that the UK Government’s Department for Business, Innovation and Skills was “not convinced that it [U-Multirank] would add value if it simply resulted in an additional European ranking system alongside the existing international ranking systems” and that the minister struck a less positive tone when he told the committee that U-Multirank could be viewed as "an attempt by the EU Commission to fix a set of rankings in which [European universities] do better than [they] appear to do in the conventional rankings”.

Just why should the British government be so bothered about a ranking tool that might show European (presumably they mean continental here) universities doing better than in existing rankings?

Finally, the committee reported that “(w)e were interested to note that THES (sic) have recently revised their global rankings in 2010 in order to apply a different methodology and include a wider range of performance indicators (up from six to 13)”.

The committee continued: “They told us that their approach seeks to achieve more objectivity by capturing the full range of a global university's activities – research, teaching, knowledge transfer and internationalisation – and allows users to rank institutions (including 178 in Europe) against five separate criteria: teaching (the learning environment rather than quality); international outlook (staff, students and research); industry income (innovation); research (volume, income and reputation); and citations (research influence).”

It is noticeable the Lords showed not the slightest concern, even if they were aware of it, about the THE rankings’ apparent discovery in 2010 that the world’s fourth most influential university for research was Alexandria University.

The complaints about U-Multirank seem insubstantial, if not actually incorrect. The committee’s report says the rankings field is overcrowded. Not really: there are only two international rankings that make even the slightest attempt to assess anything to do with teaching. The THE World University Rankings included only 178 European universities in 2011 so there is definitely a niche for a ranking that aims at including up to 500 European universities and includes a broader range of criteria.

All of the other complaints about U-Multirank, especially reliance on data collected from institutions, would apply to the THE and QS rankings, although perhaps in some cases to a somewhat lesser extent. The suggestion that U-Multirank is wasting money is ridiculous; €2 million would not even pay for four subscriptions to the Russell Group.

Debate

In the ensuing debate in the Lords there was predictable scepticism about the U-Multirank proposal, although Baroness Young of Hornsey was quite uncritical about the THE rankings, declaring that “(w)e noted, however, that existing rankings, which depend on multiple indicators such as the Times Higher Education world university rankings, can make a valuable contribution to assessing the relative merits of universities around the world”.

In February, the League of European Research Universities, or LERU, which includes Oxford, Cambridge and Edinburgh, announced it would have nothing to do with the U-Multirank project.

Its secretary general said "(w)e consider U-Multirank, at best an unjustifiable use of taxpayers' money and at worst a serious threat to a healthy higher education system". He went on to talk about "the lack of reliable, solid and valid data for the chosen indicators in U-Multirank”, about the comparability between countries, about the burden put upon universities to collect data and about “the lack of 'reality-checks' in the process thus far".

In May, the issue resurfaced when the UK Higher Education International Unit, which is funded by British universities and various government agencies, issued a policy statement that repeated the concerns of the Lords and LERU.

Since none of the problems with U-Multirank are in any way unique, it is difficult to avoid the conclusion that higher education in the UK is turning into a cartel and is extremely sensitive to anything that might undermine its market dominance.

And what about THE?

What is remarkable about the controversies over QS and U-Multirank is that Times Higher Education and Thomson Reuters, its data provider, have been given a free pass by the British and international higher education establishments.

Imagine what would happen if QS had informed the world that, in the academic reputation survey, its flagship indicator, the top position was jointly held by Rice University and the Moscow State Engineering Physics Institute (MEPhI)! And that QS argued this was because these institutions were highly focused, that they had achieved their positions because they had outstanding reputations in their areas of expertise and that QS saw no reason to apologise for uncovering pockets of excellence.

Yet THE has put Rice and MEPhI at the top of its flagship indicator, field- and year-normalised citations, given very high scores to Tokyo Metropolitan University and Royal Holloway, University of London, among others, and this has passed unremarked by the experts and authorities of university ranking.

For example, a recent comprehensive survey of international rankings by Andrejs Rauhvargers for the European University Association describes the results of the THE reputation survey as “arguably strange” and “surprising”, but it says nothing about the results of the citation indicator, which ought to be much more surprising.

Let us just look at how MEPhI got to be joint top university in the world for research influence, despite its lack of research in anything but physics and related fields. It did so because one of its academics was a contributor to two multi-cited reviews of particle physics. This is a flagrant case of privileging the citation practices of one discipline, something Thomson Reuters and THE supposedly consider unacceptable. The strange thing is that these anomalies could easily have been avoided by a few simple procedures which, in some cases, have been used by other ranking or rating organisations.

They could have used fractionalised counting, for example, the default option in the Leiden ranking, so that MEPhI would get 1/119th credit for its 1/119th contribution to the Review of Particle Physics for 2010. They could have excluded narrowly specialised institutions. They could have normalised for five or six subject areas, which is what Leiden University and Scimago do. They could have used several indicators for research influence drawn from the Leiden menu.
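
To make the arithmetic concrete, here is a minimal sketch of full versus fractional counting. The 119-way split comes from the example above; the citation total of 2,000 is an invented figure used purely for illustration.

```python
# Sketch of full vs. fractional counting of a multi-contributor review.
# The 1/119 share is from the example above; the citation total is invented.
contributing_institutions = 119
citations_to_review = 2000  # hypothetical figure, for illustration only

# Full counting: every contributing institution is credited with all citations.
full_credit = citations_to_review

# Fractional counting (the Leiden default): credit is divided among contributors.
fractional_credit = citations_to_review / contributing_institutions

print(f"Full counting credit:       {full_credit}")
print(f"Fractional counting credit: {fractional_credit:.1f}")  # about 16.8
```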

There are other things they could do that would not have had much effect, if any, on last year’s rankings, but that might pre-empt problems this year and later on. One is to stop counting self-citations, a step already taken by QS. This would have prevented Alexandria University from getting into the world’s top 200 in 2010 and it might prevent a similar problem next year.

Another sensible precaution would be to count only one affiliation per author. This would prevent universities from benefitting from signing up part-time faculty in strategic fields. Something else they should think about is the regional adjustment for the citations indicator, which has the effect of giving universities a boost just for being in a low-achieving country.

To suggest that two universities in different countries with the same score for citations are equally excellent – when, in fact, one of them has merely benefitted from being in a country with a poor research profile – is very misleading. It is in effect conceding, as John Stuart Mill said of a mediocre contemporary, that its eminence is “due to the flatness of the surrounding landscape”.

Finally, if THE and Thomson Reuters are not going to change anything else, at the very least they could call their indicator a measure of research quality instead of research influence. Why have THE and Thomson Reuters not taken such obvious steps to avoid such implausible results?

Probably it is because of a reluctance to deviate from their InCites system, which evaluates individual researchers.

THE and Thomson Reuters may be lucky this year. There will be only two particle physics reviews to count instead of three so it is likely that some of the places with inflated citation scores will sink down a little bit.

But in 2014 and succeeding years, unless there is a change in methodology, the citations indicator could look very interesting and very embarrassing. There will be another edition of the Review of Particle Physics, with its massive citations for its 100-plus contributors, and there will be several massively cited multi-authored papers on dark matter and the Higgs Boson to skew the citations indicator.

It seems likely that the arguments about global university rankings will continue and that they will get more and more heated.

A bad idea but not really new



University teachers everywhere are subject to this sort of pressure but it is unusual for it to be stated so explicitly.




"A university put forward plans to assess academics’ performance according to the number of students receiving at least a 2:1 for their modules, Times Higher Education can reveal.
According to draft guidance notes issued by the University of Surrey - and seen by THE - academics were to be required to demonstrate a “personal contribution towards achieving excellence in assessment and feedback” during their annual appraisals.
Staff were to be judged on the “percentage of students receiving a mark of 60 per cent or above for each module taught”, according to the guidance form, issued in June 2012, which was prefaced by a foreword from Sir Christopher Snowden, Surrey’s vice-chancellor, who will be president of Universities UK from 1 August.
“The intention of this target is not to inflate grades unjustifiably but to ensure that levels of good degrees sit comfortably within subject benchmarks and against comparator institutions,” the document explained.
After “extensive negotiations” with trade unions, Surrey dropped the proposed “average target mark”, with replacement guidance instead recommending that staff show there to be “a normal distribution of marks” among students."

Serious Wonkiness



Alex Usher at HESA had a post on the recent THE Under-50 Rankings. Here is an excerpt about the Reputation and Citations indicators.



"But there is some serious wonkiness in the statistics behind this year’s rankings which bear some scrutiny. Oddly enough, they don’t come from the reputational survey, which is the most obvious source of data wonkiness. Twenty-two percent of institutional scores in this ranking come from the reputational ranking; and yet in the THE’s reputation rankings (which uses the same data) not a single one of the universities listed here had a reputational score high enough that the THE felt comfortable releasing the data. To put this another way: the THE seemingly does not believe that the differences in institutional scores among the Under-50 crowd are actually meaningful. Hmmm.

No, the real weirdness in this year’s rankings comes in citations, the one category which should be invulnerable to institutional gaming. These scores are based on field-normalized, 5-year citation averages; the resulting institutional scores are then themselves standardized (technically, they are what are known as z-scores). By design, they just shouldn’t move that much in a single year. So what to make of the fact that the University of Warwick’s citation score jumped 31% in a single year, Nanyang Polytechnic’s by 58%, or UT Dallas’ by a frankly insane 93%? For that last one to be true, Dallas would have needed to have had 5 times as many citations in 2011 as it did in 2005. I haven’t checked or anything, but unless the whole faculty is on stims, that probably didn’t happen. So there’s something funny going on here."

Here is my comment on his post.


Your comment at University Ranking Watch and your post at your blog raise a number of interesting issues about the citations indicator in the THE-TR World University Rankings and the various spin-offs.



You point out that the scores for the citations indicator rose at an unrealistic rate between 2011 and 2012 for some of the new universities in the 100 Under 50 Rankings and ask how this could possibly reflect an equivalent rise in the number of citations.



Part of the explanation is that the scores for all indicators and nearly all universities in the WUR, and not just for the citations indicator and a few institutions, rose between 2011 and 2012. The mean overall score of the top 402 universities in 2011 was 44.3 and for the top 400 universities in 2012 it was 49.5.



The mean scores for every single indicator or group of indicators in the top 400 (402 in 2011) have also risen although not all at the same rate. Teaching rose from 37.9 to 41.7, International Outlook from 51.3 to 52.4, Industry Income from 47.1 to 50.7, Research from 36.2 to 40.8 and Citations from 57.2 to 65.2.



Notice that the scores for citations are higher than for the other indicators in 2011 and that the gap further increases in 2012.



This means that the citations indicator had a disproportionate effect on the rankings in 2011, one that became even more disproportionate in 2012.
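
To put a rough number on that disproportion, here is a back-of-the-envelope calculation. The weightings are my assumption (Teaching 30%, Research 30%, Citations 30%, International Outlook 7.5%, Industry Income 2.5%, which I understand to be the published THE weights); applying them to the mean indicator scores quoted above roughly reproduces the overall means of 44.3 and 49.5, and shows citations contributing more points than any other indicator.

```python
# Assumed THE weightings: Teaching 30%, Research 30%, Citations 30%,
# International Outlook 7.5%, Industry Income 2.5%.
weights = {"Teaching": 0.30, "Research": 0.30, "Citations": 0.30,
           "International Outlook": 0.075, "Industry Income": 0.025}

# Mean indicator scores of the top 402 (2011) and top 400 (2012) quoted above.
means = {
    2011: {"Teaching": 37.9, "Research": 36.2, "Citations": 57.2,
           "International Outlook": 51.3, "Industry Income": 47.1},
    2012: {"Teaching": 41.7, "Research": 40.8, "Citations": 65.2,
           "International Outlook": 52.4, "Industry Income": 50.7},
}

for year, scores in means.items():
    contributions = {name: scores[name] * weight for name, weight in weights.items()}
    overall = sum(contributions.values())
    print(year, {name: round(points, 1) for name, points in contributions.items()},
          "overall:", round(overall, 1))

# Citations contributes roughly 17 of 44 points in 2011 and nearly 20 of 50 in 2012,
# more than any other indicator and a growing share of the total.
```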



It should be remembered that the scores for the indicators are z-scores: they measure not the absolute number of citations but the distance, in standard deviations, from the mean number of normalised citations of all the universities analysed. The mean here is not the mean of the 200 universities listed in the printed and online rankings, or of the 400 included in the iPad/iPhone app, but of the total number of universities that have asked to be ranked. That number seems to have increased by a few hundred between 2011 and 2012 and will no doubt go on increasing over the next few years, though probably at a steadily decreasing rate.



Most of the newcomers to the world rankings have overall scores and indicator scores that are lower than those of the universities in the top 200 or even the top 400. That means that the mean of the unprocessed scores on which the z scores are based decreased between 2011 and 2012 so that the overall and indicator scores of the elite universities increased regardless of what happened to the underlying raw data.
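
A simple illustration of this effect, using entirely invented normalised citation scores: when a batch of lower-scoring newcomers joins the pool of ranked universities, the pool mean falls and an elite university’s z-score rises even though its own raw score has not changed.

```python
import statistics

def z_score(value, pool):
    """Distance of a value from the pool mean, in standard deviations."""
    return (value - statistics.mean(pool)) / statistics.pstdev(pool)

# Invented field-normalised citation scores for the ranked pool in year one.
pool_2011 = [3.0, 2.5, 2.0, 1.5, 1.2, 1.0, 0.9, 0.8]

# Year two: the same universities plus a batch of lower-scoring newcomers.
pool_2012 = pool_2011 + [0.5, 0.4, 0.4, 0.3, 0.3, 0.2]

elite_raw_score = 3.0  # unchanged between the two years

print(f"Year one z-score: {z_score(elite_raw_score, pool_2011):.2f}")  # about 1.83
print(f"Year two z-score: {z_score(elite_raw_score, pool_2012):.2f}")  # about 2.27
# The elite university's indicator score rises although its raw data did not change.
```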



However, they did not increase at the same rate. The scores for the citations indicator, as noted, were much higher in 2011 and in 2012 than they were for the other indicators. It is likely that this was because the difference between the top 200 or 400 universities and those just below the elite is greater for citations than it is for indicators like income, publications and internationalisation. After all, most people would probably accept that internationally recognised research is a major factor in distinguishing world-class universities from those that are merely good.



Another point about the citations indicator is that after the score for field- and year-normalised citations for each university is calculated it is adjusted according to a “regional modification”. This means that the score, after normalisation for year and field, is divided by the square root of the average for the country in which the university is located. So if University A has a score of 3.0 citations per paper and the average for the country is 3.0, then the score will be divided by 1.73, the square root of 3, and the result is 1.73. If a university in country B has the same score of 3.0 citations per paper but the overall national average is just 1.0 citation per paper, the final score will be 3.0 divided by the square root of 1, which is 1, and the result is 3.
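
Expressed as a short calculation, using the hypothetical figures from the paragraph above:

```python
import math

def regional_modification(university_score, country_average):
    """Divide the field-normalised citation score by the square root
    of the national average, as described above."""
    return university_score / math.sqrt(country_average)

# University A: 3.0 citations per paper in a country averaging 3.0 per paper.
print(round(regional_modification(3.0, 3.0), 2))  # 1.73

# University B: 3.0 citations per paper in a country averaging 1.0 per paper.
print(round(regional_modification(3.0, 1.0), 2))  # 3.0
```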



University B therefore gets a much higher final score for citations even though the number of citations per paper is exactly the same as University A’s. The reason for the apparently higher score is simply that the two universities are being compared to all the other universities in their country. The lower the score for universities in general, the higher the regional modification for specific universities. The citations indicator is not just measuring the number of citations produced by universities but also, in effect, the difference between the bulk of a country’s universities and the elite that make it into the top 200 or 400.



It is possible then that a university might be helped into the top 200 or 400 by having a high score for citations that resulted from being better than other universities in a particular country that were performing badly.



It is also possible that if a country’s research performance took a dive, perhaps because of budget cuts, with the overall number of citations per paper declining, this would lead to an improvement in the score for citations of a university that managed to remain above the national average.



It is quite likely that -- assuming the methodology remains unchanged -- if countries like Italy, Portugal or Greece experience a fall in research output as a result of economic crises, their top universities will get a boost for citations because they are benchmarked against a lower national average.



Looking at the specific places mentioned, it should be noted once again that Thomson Reuters do not simply count the number of citations per paper but compare them with the mean citations for papers in particular fields published in particular years and cited in particular years.



Thus a paper in applied mathematics published in a journal in 2007 and cited in 2007, 2008, 2009, 2010, 2011 and 2012 will be compared to all papers in applied maths published in 2007 and cited in those years.



If it is usual for a paper in a specific field to receive few citations in the year of publication or the year after then even a moderate amount of citations can have a disproportionate effect on the citations score.



It is very likely that Warwick’s increased score for citations in 2012 had a lot to do with participation in a number of large scale astrophysical projects that involved many institutions and produced a larger than average number of citations in the years after publication. In June 2009, for example, the Astrophysical Journal Supplement Series published ‘The seventh data release of the Sloan Digital Sky Survey’ with contributions from 102 institutions, including Warwick. In 2009 it received 45 citations. The average for the journal was 13. The average for the field is known to Thomson Reuters but it is unlikely that anyone else has the technical capability to work it out. In 2010 the paper was cited 262 times: the average for the journal was 22. In 2011 it was cited 392 times: the average for the journal was 19 times.
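
As a rough back-of-the-envelope comparison, here are the figures just quoted, with the journal averages standing in for the field baselines that, as noted, only Thomson Reuters can actually compute:

```python
# Citations to 'The seventh data release of the Sloan Digital Sky Survey'
# versus the average for the journal in the same year (figures quoted above).
# The journal averages are only a stand-in for the true field baselines.
paper_citations = {2009: 45, 2010: 262, 2011: 392}
journal_average = {2009: 13, 2010: 22, 2011: 19}

for year, cites in paper_citations.items():
    ratio = cites / journal_average[year]
    print(f"{year}: {cites} citations vs. a journal average of "
          f"{journal_average[year]} -> roughly {ratio:.0f} times the norm")
```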



This and similar publications have contributed to an improved performance for Warwick, one that was enhanced by the relatively modest number of total publications by which the normalised citations were divided.



With regard to Nanyang Technological University, it seems that a significant role was played by a few highly cited publications in Chemical Reviews in 2009 and in Nature in 2009 and 2010.



As for the University of Texas at Dallas, my suspicion was that publications by faculty at the University of Texas Southwestern Medical Center had been included, a claim that had been made about the QS rankings a few years ago. Thomson Reuters have, however, denied this and say they have observed unusual behaviour by UT Dallas which they interpret as an improvement in the way that affiliations are recorded. I am not sure exactly what this means but assume that the improvement in the citations score is an artefact of changes in the way data is recorded rather than any change in the number or quality of citations.



There will almost certainly be more of this in the 2013 and 2014 rankings.