
Communications of the ACM

Economic and Business Dimensions

The Gamification of Academia


[Illustration: hands touching a chess piece and a symbol of three circles. Credit: Andrew Krasovitckii]

In academia, the ethos is supposed to be the disinterested, relentless search for truth above all else, especially when the truth is uncomfortable. Truth, or rather 'Veritas,' is Harvard's motto. We idealize the academic martyrs of history, such as Galileo Galilei,4 who found observational evidence supporting the Copernican theory of planetary motion and spent the rest of his life under house arrest, imposed by his inquisitors, for challenging the doctrine of his time. We know how important academic integrity is, yet many are willing to cheat the system when academic metrics are gamified and the game's mechanics are understood by the players. Gamification tactics in systems that bridge real-world work with online tools can distort the behavior of people in any ecosystem. Academia, in particular, is susceptible to this phenomenon. Is this due to the nature of the industry, or is it simply an artifact of human nature at scale?

Yoshihiro Sato, who studied the intersection of nutrition and bone health, is considered by many in academia to be the biggest academic fraudster in scientific history.3 He fudged the data in myriad clinical trials and wrote papers subsequently published in peer-reviewed, high-impact journals. Some of his work made it into treatment guidelines that affected real human lives. His falsified data was also used to support the rationale for new clinical studies. It is a sad story in academia, but not an isolated one. In a meta-analysis of the problem, Daniele Fanelli found that 1.97% of academics admit to fabricating, falsifying, or modifying research data.2 Back at Harvard, a professor specializing in honesty has had her papers retracted for dishonesty.5 We have experienced this phenomenon of academics gaming systems firsthand.

By 1999, the academic publishing machine had become so cumbersome that it took years to get academic work published in traditional journals. There was, and still is, enormous demand for faster access to new knowledge. This was a good problem for the Social Science Research Network (SSRN) to solve. Our team began developing the software to support SSRN, together with Michael Jensen and a team of editors in Rochester, NY, just as the Web was coming of age.

As technologists, we faced the dilemma of maturing SSRN's foundational technology fast enough to keep up with the seemingly insatiable academic demand for the knowledge SSRN provided. We had a huge and growing database that included research papers from most of the world's major universities, and the attention of most academics in the social sciences. By 2013, some considered SSRN to be the world's largest source of freely available academic content.6 We had figured out how to capture the attention of academics, and we were changing their behaviors in the wild.

Focused on maintaining SSRN's position in the academic dominance hierarchy, we experimented with different tactics to drive more community engagement. We were looking for unique ways to visualize the data and share it in ways that were meaningful to academics. One of our early experiments was our "Top Ten" email distribution. It began as a simple experiment to generate views on content and drive more downloads of academic papers. We emailed the authors whose papers were among the ten most-downloaded papers in an academic domain for the month. The impact was immediate, and the behavioral change significant. There was something magical for authors, who had spent months and in some cases years on their research, about seeing their names in the rankings. We saw authors log in over the following days and upload all of their papers. Many of SSRN's academics stopped printing and handing out copies of their papers. Instead, they started emailing links to their papers on SSRN to concentrate their download counts. Eventually, we built sophisticated technology to extract, refine, and link the reference list of each academic paper to the research papers that came before it. Soon thereafter, our teams created a set of metrics around the "Eigenfactor" score, an attempt to objectively calculate academic value at the article level.
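
Eigenfactor-style scores are, at heart, eigenvector-centrality computations over a citation graph: a paper gains value when it is cited by papers that are themselves valuable. As a rough illustration only (not SSRN's or Eigenfactor's actual formula), a simplified PageRank-style version of the idea fits in a few lines of Python; the toy graph, damping factor, and iteration count are all assumptions.

    def citation_scores(cites, damping=0.85, iterations=50):
        """Simplified eigenvector-centrality scores over a citation graph.

        `cites` maps each paper ID to the list of papers it references,
        i.e., the links recovered from extracted reference lists. A paper's
        score grows when highly scored papers cite it.
        """
        papers = set(cites) | {p for refs in cites.values() for p in refs}
        n = len(papers)
        score = {p: 1.0 / n for p in papers}

        for _ in range(iterations):
            new = {p: (1.0 - damping) / n for p in papers}
            for citing, refs in cites.items():
                if refs:
                    share = damping * score[citing] / len(refs)
                    for cited in refs:
                        new[cited] += share
            score = new
        return score

    # Toy example: B is cited by both A and C, so it ends up ranked highest.
    print(citation_scores({"A": ["B"], "C": ["B"], "B": []}))

On the toy graph, B outranks A and C because it alone accrues citations; real article-level metrics add normalizations and exclusions (self-citations, for example) that are out of scope here.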

This fed the SSRN community's desire for more engagement, more content, and more value delivered to the academy. SSRN was a start-up operating on extremely thin margins. In our strategy meetings, Jensen used to say, "We need to give away as much as possible … we'll make it up in volume." He was convinced that we needed to build the community as fast and as comprehensively as possible. SSRN was highly incentivized to grow quickly because we needed to fend off the numerous competitors that had emerged to challenge our position.

Once we figured out the power of gamification, we started experimenting like Gregor Mendel with his peas. We combined a variety of metrics and dissemination approaches to determine what would affect uploads, downloads, and overall engagement. We ranked papers, authors, universities, departments within universities, and organizations within countries. We ranked within academic networks, classifications, and even sub-classifications to give everyone the opportunity to see how they compared to their competitors and to the overall market. It worked to the point where high-profile news analysts were using SSRN rankings to rank universities in different domains.6 We saw many instances of academics placing their SSRN rankings and download counts on their CVs to boost their market value and attention when moving between universities. Researchers often asked for "verification" of their metrics to include in their tenure packages. Our metrics started having real-world implications for people's careers. The rankings of articles, authors, journals, and universities brought more attention and drove more engagement.
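
Conceptually, every one of those leaderboards was the same operation applied along a different grouping dimension. A minimal sketch of the idea, with hypothetical field names rather than SSRN's actual schema:

    from collections import defaultdict

    def leaderboards(papers, group_key, top_n=10):
        """Rank papers by download count within each group.

        `papers` is a list of dicts with illustrative fields such as
        "downloads" plus grouping attributes like "classification",
        "university", or "author". The same routine produces every
        leaderboard; only `group_key` changes.
        """
        groups = defaultdict(list)
        for paper in papers:
            groups[paper[group_key]].append(paper)
        return {
            group: sorted(members, key=lambda p: p["downloads"], reverse=True)[:top_n]
            for group, members in groups.items()
        }

    # The same call yields a ranking per classification, per university,
    # or per sub-classification:
    #   leaderboards(papers, "classification")
    #   leaderboards(papers, "university")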

With all the increasing attention and incentives associated with SSRN rankings, we started to see strange patterns in our download data. Some papers were quickly ratcheting up to the top of the rankings, causing the team concern. Jensen remarked, on more than one occasion, that some awful papers had achieved top rankings; it made no sense. This awareness triggered a detailed investigation into what was happening. We discovered several instances of nefarious activity within the community. We attacked the problem head-on by creating our own "Fraud Index," which examined several data points, including the source of downloads: essentially a filtered ratio comparing download counts against geolocation and timing signals. Building it was tricky at the time for a variety of reasons, but we identified the most likely bad actors. Once we learned how to catch them, we could enforce log-ins and take other actions to track their activities.
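
The actual index cannot be disclosed, but a heuristic of that general shape can be sketched: score each paper by how concentrated and how machine-like its downloads look. Everything below, from the field names to the one-minute threshold, is a hypothetical illustration rather than SSRN's implementation.

    from collections import Counter

    def fraud_score(downloads):
        """Toy 'Fraud Index' for one paper: a filtered ratio comparing
        download volume against the diversity of download sources and the
        timing between downloads.

        `downloads` is a list of dicts with hypothetical fields:
          {"geolocation": "US/Rochester", "timestamp": <datetime>}
        Scores near 1.0 suggest concentrated, machine-like activity.
        """
        n = len(downloads)
        if n < 2:
            return 0.0

        # Geolocation concentration: share of downloads coming from the
        # single most common location (1.0 means one place produced them all).
        locations = Counter(d["geolocation"] for d in downloads)
        geo_concentration = locations.most_common(1)[0][1] / n

        # Timing regularity: fraction of inter-download gaps under a minute.
        # Readers rarely fetch the same paper in tight, steady bursts.
        times = sorted(d["timestamp"] for d in downloads)
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        burstiness = sum(1 for g in gaps if g < 60) / len(gaps)

        # Both signals must be high for a paper to be flagged.
        return geo_concentration * burstiness

Organic downloads spread across continents and weeks score near zero; a script hammering a paper from one lab scores near one.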

The "publish or perish" mindset continues to plague academia. This mindset is the basis for tenure, promotion, and a powerful, underlying incentive for recognition and status. It causes many to make decisions and behave in ways that compromise their personal integrity and, in some cases, have the potential to jeopardize their careers. The Self-Determination Theory1 of human intrinsic motivation describes the need for autonomy, competence, and relatedness in a way that partially explains this phenomenon. Academia has focused on academic dominance hierarchies and various displays of status. The online world, without controls, appears to be a place where you can exert autonomy and control the world's perception of your competence, helping you gain an advantage on the academic competition. The rankings are proxies for academic relevance and can play a big role in the academic attention economy. It is a powerful conflation of extrinsically motivating forces.

These forces motivated a small but non-trivial number of academics to game the rankings. Some were so desperate to control the market perception of their work that they were willing to commit fraud by repeatedly attempting to game the metrics. In several cases, the more hurdles we put in place, the harder they tried to find a way around them. The tactics varied, and cannot be fully disclosed here, but they included having graduate students continuously download their papers, writing software scripts to inflate their metrics, and inappropriately "promoting" their paper downloads.

Is this phenomenon of cheating the metrics in academia an artifact of people being attracted to the field in search of status in the dominance hierarchies of human intelligentsia? Or is it an artifact of normal human behavior when gamification tactics are inserted into people's lives?


References

1. Deci, E. and Ryan, R. The Handbook of Self-Determination Theory. (2004); https://bit.ly/467vPMS

2. Fanelli, D. How Many Scientists Fabricate and Falsify Research? PLoS ONE (2009); https://bit.ly/3PotlTx

3. Kupferschmidt, K. Researcher at the center of an epic fraud remains an enigma to those who exposed him. Science.org (2018); https://bit.ly/3EJ4jcP

4. Reston, J. Galileo: A Life. (2005); https://bit.ly/44Y2KSQ

5. Stern, J. The Harvard expert on dishonesty who is accused of lying. The Atlantic (July 7, 2023).

6. World Ranking of Academic Repositories, Ranking Web of Repositories. Cybermetrics Lab. (Jan. 2013).


Authors

Sean Flaherty (sflaherty@itx.com) is managing partner, ITX Corporation, and Executive Professor, William E. Simon School of Business, University of Rochester, Rochester, NY, USA.

Gregory Gordon (gregg@ssrn.com) is managing director, Knowledge Lifecycle and SSRN, Elsevier, London, England, U.K.


Copyright held by owner(s)/author(s).