Retraction: The “Other Face” of Research Collaboration?

Abstract: The last two decades have witnessed the rising prevalence of both co-publishing and retraction. Focusing on research collaboration, this paper utilizes a unique dataset to investigate factors contributing to retraction probability and the elapsed time between publication and retraction. Data analysis reveals that the majority of retracted papers are multi-authored and that repeat offenders are collaboration prone. Yet, all things being equal, collaboration, in and of itself, does not increase the likelihood of producing flawed or fraudulent research, at least in the form of retraction. That holds for all retractions and also for retractions due to falsification, fabrication, and plagiarism (FFP). The research also finds that publications with authors from elite universities are less likely to be retracted, which is particularly true for retractions due to FFP. China stands out with the fastest retraction speed compared to other countries. Possible explanations, limitations, and policy implications are also discussed.


Introduction
Retracted papers not only waste public resources, 1 and inhibit the ability of health care professionals to serve their patients well, but also tarnish the trust of the public toward science and scientists (Azoulay et al. 2012; Lacetera and Zirulia 2011; Stern et al. 2014; Zuckerman 1988). Unsurprisingly, over the past decade, a growing number of studies have investigated the rising phenomenon of retraction, giving rise to four major research streams.
The first line of research describes this rising phenomenon. Studies consistently show that both the number and the growth rate of retracted articles have increased sharply (Fanelli et al. 2015; Fang et al. 2012; Nath et al. 2006). This high incidence of retractions not only occurs in biomedical articles indexed in PubMed but also spans other disciplines (Fanelli 2013; Zhang and Grieneisen 2013). In their efforts to understand the rising prevalence of retractions, Grant Steen and colleagues (2013) pointed out that lower barriers to publication and retraction contribute to the sharp increase in retractions, and Daniele Fanelli (2013) noted that greater institutional scrutiny, facilitated by the rapid development of information and communications technology and scholarly communication, plays an important role in identifying fraudulent research or false science. In this sense, the increasing number of retractions is a good sign of science's self-correction and oversight mechanisms (Brainard 2018). Extant studies also found that retraction is particularly prevalent for research produced in countries such as the USA, Germany, Japan, China, the UK, and India. 2 China leads in duplicate publications, followed by the USA and India (Grieneisen and Zhang 2012). Daniele Fanelli and colleagues (2015) maintained that scientific misconduct is more likely to occur in countries that highly incentivize publications while lacking clearly defined and operational research integrity policies. Shunhai Qu and Viroj Wiwanitkit (2015) argued that such country differences lie in the size of the researcher pool and the research output volume as well as the perception of misconduct.
The second line of research into retraction categorizes the reasons behind it. Sara Nath and colleagues (2006) found that more than two-thirds of the biomedical retractions between 1982 and 2002 resulted from unintentional mistakes such as inappropriate sampling procedures and coding errors, among others. Steen (2011b) also reported that about three-quarters of the retracted PubMed papers between 2000 and 2010 were due to errors or undisclosed reasons. Different findings also emerge. By combining information from Retraction Watch, the U.S. Office of Research Integrity (ORI), and other public sources, Ferric Fang and colleagues (2012) reported that only one-fifth of the retractions resulted from errors, whereas over two-thirds of the papers were removed because of scientific misconduct manifest in falsification, fabrication, and plagiarism (FFP). 3

The third strand of investigation focuses on factors influencing the likelihood of retraction and the elapsed time between publication and retraction. Research has consistently reported that the number of retractions correlates significantly with the impact factors of the journals in which the retracted papers were published.
Nikolaos Trikalinos and colleagues (2008) observed that retractions resulting from falsification often take longer, especially when senior researchers are involved. Steen (2011b) also argued that a journal's impact factor is often significantly higher for fraudulently produced papers, as their authors might target journals with a high impact factor. The same findings were reported by Ryoji Noyori and Joe Richmond (2013): the higher the journal's impact factor, the more retractions there were. They speculated further that the incentives of publishing in high-impact journals, such as career development and receipt of research grants, played an important role in stimulating fraudulent authors.
Recently, studies have also appeared treating the event of retraction as an independent variable and examining its impact on individual development. For example, Laura Bonetta (2006) reported the negative influence of research misconduct on coauthors' reputations and career development. Ginger Jin and colleagues (2013) documented a heterogeneous impact of retraction penalties on eminent and less-famous collaborators.
The reverse Matthew Effect was observed, with a smaller citation penalty on more established researchers. 4 In a similar vein, Philippe Mongeon and Vincent Larivière (2016) noted that retraction does negatively impact authors' career development, with primary authors, namely first authors and reprint authors, suffering more.
Along this line of investigation, some studies examined the impact of retraction on the advancement of research fronts. Pierre Azoulay and colleagues (2012) investigated the extent to which a retraction event impacts the rate and the direction of science development. Their research revealed that both funding and knowledge production activities decreased in the areas of false science. This negative spillover effect of retraction is also supported by Susan Feng Lu and colleagues (2013). They found that the citation penalty for non-self-reported retraction goes beyond the retracted paper itself.

3 Different samples and coding schemes are the two main reasons accounting for the different results between Steen (2011b) and Fang and colleagues (2012). First, while both used PubMed as the data source, Steen (2011b) covered about 800 retracted articles between 2000 and 2010, while Fang and colleagues (2012) looked at some 2,000 retracted articles between 1973 and May 2012. Second, Steen (2011b) collapsed "undisclosed reasons" for retraction under the category of errors based on retraction notices, while Fang et al. (2012) improved the research by incorporating information from ORI. It is interesting to note that Steen is also a coauthor of the Fang and colleagues (2012) paper.

4 According to Robert Merton (1968), the Matthew Effect refers to the cumulative advantages of renowned scientists, who tend to garner more credit than unknown scientists with equal contributions. In the research of Jin and her colleagues (2013), the term "reverse Matthew Effect" means eminent scholars still win, with less credit taken away compared to the less eminent ones who were also punished for retracted articles.
In spite of the growing number of studies on retraction, one research gap is the possible connection between co-publishing and retraction. In a departure from past scholarship, this paper studies the connection between co-publishing and retraction and tries to further understand the factors affecting the likelihood of retraction and the time it takes for false science to be removed from the literature. Of particular interest is whether there is a relationship between collaboration and the likelihood of retraction as well as the speed of retraction.

Hypotheses for testing
As an old proverb goes, too many cooks spoil the broth. In the social psychology literature, diffusion of responsibility, also referred to as the bystander effect, suggests that an individual is less likely to take responsibility for action, or more likely to remain idle, in the presence of others, as the individual assumes that others either are responsible for taking action or have already done so (Darley and Latane 1968). This bystander inaction can occur in real-life academia. In the case of joint publication, for example, it is reasonable to assume that each coauthor behaves as though the responsibility for the credibility and quality of the coauthored work is diffused and thus does not necessarily take responsibility for validating the collective knowledge product. This underperformance due to diffusion of responsibility, combined with the costs of collaboration such as knowledge fragmentation and coordination failure (Youtie and Bozeman 2014), increases the likelihood of producing flawed scientific findings, which, in turn, leads to a higher probability of retraction. The first hypothesis is

H1: Research collaboration increases the probability of retraction holding other factors constant.
Retraction is a self-correction mechanism of the science community. When a paper is retracted, its findings are invalidated. The sooner a flawed or fraudulent paper is retracted, regardless of the reason, the less negative impact it will have on future research (Fanelli 2013; Gasparyan et al. 2014). However, existing research has paid very little attention to the factors affecting the time between publication and retraction, with a few notable exceptions. Trikalinos and colleagues (2008), for example, reported an average two-year retraction time. Jeffrey Furman and colleagues (2012) argued that no observable factors affected the time to retraction except for publication year, whose statistically significant and negative regression coefficient provided strong evidence of a trend toward shorter times to detect flawed findings.
Joint research on average receives more citations and scholarly attention (Glänzel and Schubert 2004; Van Raan 1998) and thus greater scrutiny and a higher probability of its shaky or fraudulently produced findings being detected more quickly. Thus the second hypothesis is

H2: Research collaboration decreases the elapsed time between publication and retraction.
The status of authors or their affiliated institutions can also influence the likelihood of retraction. Prominent scientists are less likely to be caught for misconduct than average scientists (Lacetera and Zirulia 2011). One explanation is that scholars from elite universities care more about their academic reputations and those of their home institutions. The anticipated cost of attempting fraud is considerable for prominent scholars who have established their professional reputations (Fanelli et al. 2015; Lacetera and Zirulia 2011). From the perspective of organizational studies, John Walsh and his colleagues (2019) argued that university norms, culture, and other unobservable features can also influence researchers' attitudes and behaviors. Additionally, it takes courage to challenge the findings or even the integrity of researchers from elite universities, who enjoy more epistemic authority and often evaluate and make decisions on other researchers' funding applications and promotions (Hao 2009). Thus, the third hypothesis is

H3: Research collaboration involving researchers from elite universities is less likely to be retracted.
In the same vein, an author's or institution's status in the academic hierarchy also influences the time to retraction. For example, Trikalinos and his colleagues (2008) found that fraudulent or flawed articles by senior researchers take a longer time to be retracted. 5 Nicola Lacetera and Lorenzo Zirulia (2011) posited that questioning the work of established scholars could be costly for junior researchers in terms of their future publishing opportunities and career advancement. This leads to the fourth hypothesis:

H4: It takes longer for a flawed paper involving collaboration with scholars from elite universities to be retracted.

Data and Method
A unique dataset was constructed to address the research questions and test these hypotheses. The primary source is a set of retracted papers retrieved from the core sets of Web of Science (WoS), 6 which indexes about 11,600 peer-reviewed journals spanning a wide spectrum of disciplines. A composite Boolean query was used to search for retraction notices in January 2014, using "retract*" in the fields of title, keywords, and abstract. The search was confined to the document type of corrections for the period from 1978 to 2013. This returned 2,648 hits. The full texts of the retraction notices were downloaded, with each linking to a corresponding retracted article. Two independent teams manually read all retraction notices. After several rounds of independent verification, cross-checking, and removal of duplicates and irrelevant records, 2,087 unique retracted papers published between 1978 and 2013 were eventually identified. 7 The full bibliographical records of the retracted articles indexed in WoS were downloaded. Based on the nearest-neighbor-matching principle proposed by Jeffrey Furman and colleagues (2012), each retracted paper was initially matched with the two control articles immediately before and after the retracted one in the same issue of the same journal.

5 It should be pointed out that there is no agreed-upon definition of elite scholars, star scientists, senior scholars, or established researchers in the extant literature. These terms are often used interchangeably. According to Trikalinos et al. (2008), senior scholars are professors, lab directors, experienced investigators, or researchers with a more-than-five-year record of publishing original articles. In this research, a dummy variable of Top 100 universities is used as a proxy indicator of an institution's research status (also see Walsh et al. 2019). For more details please refer to the section on variables.
If a neighbor does not qualify (for example, its document type is conference abstract, letter, correction, or editorial, among others), the retracted paper was matched with its next nearest neighbor. The farthest neighborhood distance is three, i.e., three papers ahead of or behind the retracted one. If a retracted article was the first or last one in an issue, only one matched article was identified. In this way, and with several rounds of data cleaning and standardization, 3,970 control records were finally identified, with a 96.6% matching rate. The two datasets were then imported into the text mining software VantagePoint. The final core dataset for analysis consists of 6,057 records: 2,087 retracted articles and 3,970 matched control articles. Journal impact factors and global rankings of the institutional affiliations of the authors of the retracted and control articles were retrieved from the 2012 ISI Journal Citation Reports (JCR) 8 and the 2014 Academic Ranking of World Universities (ARWU) respectively and integrated into the publication dataset. 9
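The nearest-neighbor-matching rule described above can be sketched in Python. This is an illustration only: the study itself relied on manual cleaning and VantagePoint, and the data structure and function names here are invented for clarity.

```python
def match_controls(issue_docs, retracted_idx, max_dist=3):
    """Nearest-neighbor matching within one journal issue.

    issue_docs: list of (position_in_issue, doc_type) tuples, in issue order.
    retracted_idx: list index of the retracted paper.
    Returns up to two control positions: the nearest eligible article
    before and after the retracted one, searching at most max_dist away.
    """
    INELIGIBLE = {"conference abstract", "letter", "correction", "editorial"}
    controls = []
    for direction in (-1, +1):              # one control before, one after
        for dist in range(1, max_dist + 1):
            j = retracted_idx + direction * dist
            if 0 <= j < len(issue_docs) and issue_docs[j][1] not in INELIGIBLE:
                controls.append(issue_docs[j][0])
                break                        # take the nearest eligible neighbor
    return controls

# The retracted paper at position 3 skips the adjacent letter and is
# matched with the articles at positions 1 and 4:
issue = [(1, "article"), (2, "letter"), (3, "article"), (4, "article")]
matches = match_controls(issue, 2)           # → [1, 4]
```

Note how a retracted article at the end of an issue yields a single control, which is consistent with the matching rate below 100% reported above.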

Retraction reasons
Retraction occurs for a variety of reasons, ranging from fabrication, falsification, and plagiarism (FFP) to duplicate publication, lack of Institutional Review Board (IRB) approval, scientific errors, author disputes, and the like (Fanelli et al. 2015; Fang et al. 2012; Furman et al. 2012; Steen et al. 2013; Wager and Williams 2011; Williams and Wager 2013). Independent coding of the reasons for retraction in the dataset was carried out from June to July 2017 by two research teams following agreed-upon procedures. Inter-coder reliability reached an agreement rate of 92% in the first round. For the 177 cases with first-round disagreement, a third researcher discussed each case with the coders until consensus was reached. The coding scheme of 13 types of retractions, the number of cases, and illustrative examples are detailed in Appendix 1.
Following previous practice (Gasparyan et al. 2014), we further aggregated the reasons for retraction into four categories: misconduct and suspected misconduct (types 1 through 10), scientific errors (type 11), publisher errors (type 12), and unknown (type 13).

[Figure 1 inserted here]

Table 1 lists the top ten "repeat offenders," each of whom has retracted at least 20 articles in our dataset.
Together they account for 226 unique retracted articles, roughly 11% of our sample. Nine of the ten repeat offenders are from the top five countries with the largest number of retracted papers: 5 from Japan, 2 from Germany, and 2 from the U.S. Seven are in the research domain of the life sciences, two are in physics, and one is in psychology. The top ten recidivists are all male. This finding echoes that of Ferric Fang and colleagues (2013) that males are overrepresented in misconduct in the life sciences. All ten repeat retractors are collaboration prone. Three clusters of retracted papers are highlighted by different colors in Table 1. The tight interconnection among repeat retractors is itself worthy of further investigation. Some are leading offenders (such as Yoshitaka Fujii and Jan Hendrik Schön) verified by respective investigation committees, while some are only followers (such as Christian Kloc) or even innocent collaborators, as Mongeon and Larivière (2016) suggested. 10

[Table 1 inserted here]

Temporal and geographical distribution
The data indicate that between 1978 and 2013, both the number and growth rate of retractions increased over time. This is consistent with the findings of previous studies despite different examination periods and publication coverage. As illustrated in Figure 2, the rapid growth of retracted papers is mainly driven by misconduct. Compared to the steady growth of retractions for errors, retracted articles resulting from misconduct have been growing at a significant rate.
[Figure 2 inserted here]

The data also reveal that the distribution of retracted research is highly skewed nationally, with the USA topping the list of countries by number of retracted articles (622). Based on the implication in Figure 3 that countries publishing a smaller number of papers may have a larger percentage of retractions, further analysis of all countries involved in at least ten retractions in this dataset revealed that, surprisingly, Egypt turns out to be the country with the largest share of retractions: for every ten thousand (10,000) Egyptian publications, about 3.04 papers were retracted over the examined period. It is followed by Iran (2.95‱), South Korea (2.19‱), China (2.16‱), and India (1.93‱) in descending order. The full list is shown in Appendix 2.
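The per-ten-thousand shares above follow from simple arithmetic. A minimal sketch, in which the publication and retraction counts are purely illustrative (the paper reports only the resulting rates, not the underlying denominators):

```python
def retraction_rate_per_10k(n_retracted, n_publications):
    """Retractions per 10,000 publications (the ‱ unit used in the text)."""
    return round(n_retracted / n_publications * 10_000, 2)

# Hypothetical counts chosen only to illustrate the calculation:
# 30 retractions out of 98,700 publications give a rate of about 3.04‱.
rate = retraction_rate_per_10k(30, 98_700)
```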

Journal impact factors
The subject-specific quartile impact factors of each journal were calculated based on the 2012 ISI JCR.
If a journal is assigned to different subject categories or disciplines, 12 its impact factors at both the highest and lowest quartiles were calculated. Consistent with previous findings noted above, our analysis indicates that retractions appeared more frequently in journals with higher impact factors. This conclusion holds when taking into account multidisciplinary journals with different subject-specific JIFs. As illustrated in Figure 4, when taking the lowest quartiles for all journals, 46% of the retracted articles were published in Quartile-1 journals (high-impact-factor journals) while only 11% were in Quartile-4 journals (low-impact-factor journals). Using journals' highest impact-factor quartiles, the retractions were 55% in Quartile-1 but 7% in Quartile-4.
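The highest/lowest-quartile logic for multidisciplinary journals can be sketched as follows; the subject-category names and quartile assignments below are hypothetical examples, not data from the study.

```python
def jif_quartile_range(quartile_by_category):
    """For a journal assigned to several subject categories, return its
    best and worst JIF quartiles across those categories. Quartile 1 is
    the highest-impact band, so numerically smaller is better."""
    quartiles = sorted(quartile_by_category.values())
    return quartiles[0], quartiles[-1]

# A journal indexed in three categories with different subject-specific
# JIF quartiles counts as Q1 under the "highest quartile" rule and as Q3
# under the "lowest quartile" rule:
best, worst = jif_quartile_range({"ONCOLOGY": 1, "BIOPHYSICS": 2, "CELL BIOLOGY": 3})
```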

Disciplinary distribution
With regard to research domains, similar to the findings of Susan Feng Lu and colleagues (2013) and Fanelli (2013), which also analyzed WoS retracted articles, our data show that retraction was more common in the biomedical and life sciences. As illustrated in the inner circle of Figure 5, over 60% of the retracted articles were in the life sciences and biomedicine; by sharp contrast, only 0.1% of the retracted articles were in the arts and humanities and 5.1% in the social sciences. The highly uneven distribution of retractions across disciplines may reflect a lower incidence of false science, or lower rates of detection of problematic research, in the arts and humanities and social sciences, where replicability is harder to implement (Fox and Braxton 1994). The outer circle of Figure 5 illustrates the distribution of retracted articles against all publications in the five research areas between 1990 and 2013.
Notably, the proportion of retracted articles in the life sciences and biomedicine is one-third higher than these fields' share of WoS articles, which is only 42.5% of indexed publications. 13 The leading scientific nations such as the USA, China, Japan, Germany, and India all witnessed higher proportions of retractions in the life sciences and biomedicine relative to their shares of papers in this research domain. The enormous consequences and economic potential of research in these fields, as well as the fierce competition for positions, promotions, funding, and especially priority of discovery and peer recognition, might have led life scientists to rush to publish. On the other hand, once published, such publications are more likely to be subject to stricter peer scrutiny for the same reasons. Such attention, facilitated by an expanded scientific community and information and communications technologies, may mean that internal errors are more likely to be caught (Fanelli 2013; Steen et al. 2013).

Collaboration and retraction
Of the 2,087 retracted papers, 93% had two or more authors, with 51% involving inter-institutional collaboration and 19% collaboration across national borders. Table 2 compares the collaboration size (number of authors, number of affiliations, and number of countries) and the time to retraction for retracted papers vs.
control groups. As shown, retracted articles, in general, have smaller collaboration dimensions of author, affiliation, and country. Next, the paper examines whether this difference remains statistically significant when controlling for other factors.

Statistical analysis
Variables. The unit of analysis is the article. The two key focal variables of the study are (1) retraction of flawed research and (2) retraction time lag.

Dependent variables. The first dependent variable, retraction, is a dichotomous variable: an article is coded 1 if it is in the retraction group and 0 if in the control group. The second dependent variable is time to retraction, a continuous variable measured by the natural log of the elapsed months between an article's publication and its retraction. 14

Explanatory and control variables. The major independent variable of the study is research collaboration. It is measured by three indicators: the number of authors, the number of affiliations, and the number of countries. The institutional factor was measured by a dummy variable indicating whether any author is affiliated with an elite university, i.e., one of the Top 100 universities reported by the 2014 Academic Ranking of World Universities (ARWU). 15 Additionally, year of publication, research domain, journal impact factor, and a set of primary-country dummy variables were included in the regression models to control for the research culture factor.
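The time-to-retraction outcome can be constructed as sketched below. This is an assumption-laden illustration: the paper does not specify how partial months were handled, so whole calendar months are used here, and the function names are invented.

```python
import math
from datetime import date

def months_between(published, retracted):
    """Whole calendar months elapsed between publication and retraction."""
    return (retracted.year - published.year) * 12 + (retracted.month - published.month)

def log_time_to_retraction(published, retracted):
    """Natural log of elapsed months, the regression outcome described above."""
    return math.log(months_between(published, retracted))

# A paper published in January 2010 and retracted in January 2012
# has an elapsed time of 24 months; the outcome is ln(24).
y = log_time_to_retraction(date(2010, 1, 1), date(2012, 1, 1))
```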

Regression Results
Logistic regressions were used to test Hypotheses 1 and 3, while regressions on the natural logarithm of the time to retraction in months were used to test Hypotheses 2 and 4. The primary results are presented in Tables 3 and 4 (in both tables, Panels 1 and 3 are for the global dataset while Panels 2 and 4 focus on the FFP sub-dataset). 16

[Tables 3 & 4 Inserted here]
Several findings are noteworthy. First, the evidence does not support Hypothesis 1 that research collaboration increases the chance of producing false science. The odds ratios of collaboration size are less than 1 and statistically significant for the number of authors, indicating that collaboration size is negatively associated with the retraction event, holding other factors constant. In other words, all else being equal, an increase in the number of collaborators does not increase the likelihood of retraction. This holds for all types of retractions and for retractions due to FFP as well. One possible explanation is that the cross-pollination of different minds and the validation of findings ensure a higher standard of quality control and thus more robust research that is less likely to be retracted. In other words, a larger collaboration size implies more internal auditing among research collaborators, and therefore a higher chance of identifying fraud or significant errors prior to publication through scrupulous checking and knowledge validation. Meanwhile, it takes longer to retract a flawed paper involving more authors. As shown in Table 4, collaboration sizes are all positively correlated with retraction time. One speculation is that it may take longer and involve more effort for a journal to investigate a multi-authored paper and make the final decision of retraction; but such an impact is only statistically significant for the number of authors. Hypothesis 2 is partially supported.

15 Considering the possible stability of the ARWU ranking, the rankings of 2014's Top 100 universities in the year 2003, the earliest year of ARWU rankings, were also downloaded. The comparison reveals that eighty-two of the Top 100 universities listed in 2014 were also in the Top 100 ranking of 2003.

16 Taking the possible nonlinear effect of collaboration into consideration, we also added squared terms of all three indicators in the regressions.
Other robustness tests we conducted include narrowing the elite-university indicator from any author affiliated with a Top-100 university to the primary authors only (i.e., first author and reprint author), changing the numerical measurement of collaboration to a set of three dummy variables, excluding retractions due to publisher errors and unknown reasons (and their matched records) from the regression, and excluding articles coauthored by repeat offenders. Overall, the results remain robust and support the main finding.
Second, the data suggest that publications with authors from elite universities were less likely to be retracted. Globally, only slightly over one in five (22.6%, to be exact) retracted articles involved at least one author from a Top-100 university, and 14.9% of the retracted articles had a first or reprint author from an elite university. As shown in Table 3, the odds ratio for authors affiliated with a Top-100 university is 0.77. That means that, holding other factors constant, the odds of an article with at least one author from a Top-100 university being retracted were 23% lower than those of an article without such an author, suggesting a stronger tendency among global elite universities to inhibit retractions. This is particularly true for retractions due to FFP. The same pattern also holds when collaboration size is measured by the number of institutions and the number of countries involved. This empirical finding is consistent with the game-theoretic finding that elite authors are less likely to be caught in misconduct even if their probability of publishing a fraudulent paper is higher (Lacetera and Zirulia 2011). It may also suggest that collaborating with prominent scholars reduces the chance of producing sloppy work that leads to retraction. Meanwhile, as indicated by the negative signs of the regression coefficients in Table 4, if a paper coauthored by an elite scientist had to be retracted, it was retracted more quickly. 17 One possible reason for this phenomenon is that collaboration involving leading scientists is likely to produce findings at the research frontier, which are scrutinized and replicated by a greater number of follow-up studies. Therefore, flaws or fraud, if any, are detected more quickly by the larger scientific community. Moreover, elite scientists may take more proactive steps and retract papers immediately once the research is suspected of falsification.
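The "23% lower odds" reading of the 0.77 odds ratio follows directly from its definition. A small sketch (illustrative only; only the 0.77 value comes from Table 3):

```python
import math

def odds_change_pct(odds_ratio):
    """Percentage change in the odds implied by an odds ratio:
    OR < 1 means lower odds, OR > 1 means higher odds."""
    return (odds_ratio - 1.0) * 100.0

# Table 3 reports OR = 0.77 for having a Top-100-university author,
# i.e., roughly 23% lower odds of retraction, other factors held constant.
change = odds_change_pct(0.77)

# The same odds ratio corresponds to a raw logit coefficient of ln(0.77):
beta = math.log(0.77)
recovered_or = math.exp(beta)   # exp(beta) recovers the odds ratio
```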
The third and final notable finding concerns collaboration involving scientists from the countries with the largest numbers of retractions. As demonstrated in Panel 1 of Table 3, among the top five countries with the largest number of retractions, ceteris paribus, China and India stand out with the largest likelihoods of retraction of research involving their scientists as primary contributors. 18 This finding echoes Adam Segal's (2011) statement that the emerging scientific powers, in intimately linking scientific research to the national pride of catching up with and surpassing the incumbents, have sometimes ended up with unintended and mostly undesirable consequences. Panels 3 and 4 of Table 4 show that among the five countries, only articles with primary authors from China demonstrate negative associations with time to retraction. In other words, China is the country whose scientists retracted papers most quickly.

Discussion and Conclusions
This paper investigated the characteristics of retracted papers indexed in the Web of Science. When controlling for confounding factors such as research domain, publication year, journal visibility, and research environment and culture, our findings provide suggestive evidence that working together helps encourage responsible behavior among researchers; i.e., retraction may not be the "other face" of research collaboration. On average, collaborative research is less likely to be "false science," and such "false science," once it involves researchers from elite universities, is retracted more quickly. Of course, the size and types of collaboration could themselves be endogenous, driven by the factors mentioned above as well as other factors such as easy access to facilities and resources, validation of knowledge, the inherently interdisciplinary and exceptionally exploratory nature of research at the frontier, or the increased efficiency of knowledge production due to division of labor (Beaver and Rosen 1979; Uzzi and Spiro 2005).
Our research has some limitations. To begin with, given that this empirical study is based upon secondary data, we can speculate only within our knowledge of research governance practice and the extant literature. Finally, the caveats of statistical tests and the misinterpretation of results have been discussed in previous studies (for more discussion, see Schneider 2011, 2015). We need to be cautious and should not over-interpret the findings based on statistical results alone.
Bearing those limitations in mind, we also draw from our findings some policy implications for research collaboration and evaluation practices. First, scientific misconduct is a growing problem that has bedeviled the research community in recent decades. Globally, there is a compelling government interest in promoting responsible research behavior and invalidating "false science" as soon as possible. Our study suggests that jointly published research with contributions of primary authors from top universities is less likely to be retracted; and once retracted, the elapsed time between publication and retraction is rather short. This finding offers empirical support for policy proposals that endorse research collaboration, especially collaboration involving scientists at top universities.
Second, our findings are relevant to current performance evaluation policies in some countries that strongly disincentivize collaboration (Yan et al. 2016; Zhou et al. 2009). China is now among the world's largest producers of scientific knowledge (Tang 2019). Yet, in many Chinese universities only the status of the first author or reprint author is counted toward faculty tenure and promotion. For example, the written criteria for tenure and promotion at some institutions, such as Shanghai Jiao Tong University, clearly state that to be eligible for promotion to a higher academic rank a faculty member must publish two first- or reprint-authored papers in WoS-indexed journals. This means that any paper on which an academic is listed as a second or other author will be heavily discounted if not completely excluded. Therefore, academics are understandably reluctant to collaborate if they are not listed as primary authors, especially with competitors within the same institution. This has stifled internal collaboration. 19 If, however, Chinese universities change their policies to give credit for all authorships toward tenure, or to count all authorships reasonably, this would stimulate collaboration, although it may also invite free riding. Therefore, some universities, such as the Shanghai University of Finance and Economics, have adopted sophisticated fractional counting formulas for credit sharing to reduce the possibility of ghost authors. Of course, collaborators need to understand that the privilege of authorship comes with not only credit but also responsibility.
Third, our empirical analysis demonstrates that collaborative publications with Chinese or Indian scientists as primary authors are far more likely to be retracted than those led by scientists from other major scientific countries. This is consistent with the general concern about developing and emerging scientific countries such as China and India that still lack intellectual capital but strive to secure their seats at the league table of global academia (Cao et al. 2013; Liu et al. 2015). Therefore, the Chinese and Indian governments have strong incentives to tackle the frequent and rising incidence of problematic research involving their scientists, as such research could damage the reputations of not only their scientists but also their countries (Kornfeld and Titus 2016). Scientists from these countries need to make extra effort to maintain the integrity of their work, because problematic research, if not stopped, will only discourage collaboration and delay the process through which these countries pursue scientific excellence.