We investigated the extent to which variance in research misbehavior can be explained by individual, climate and publication factors. Overall, the three clusters combined explain 34% of the variance in perceived frequency of research misbehavior and 18% of the variance in perceived impact. The cluster accounting for the greatest share of explained variance is the research climate, explaining 22% and 14% of the variance in perceived frequency and perceived impact of research misbehavior, respectively. Publication pressure is the second greatest explanatory cluster, accounting for 16% of the variance in perceived frequency and 12% of the variance in perceived impact. Individual factors form the smallest cluster, explaining 7% of the variance in perceived frequency and 1% in perceived impact.
We found academic rank to play the greatest role within the cluster of individual factors. Previous research has offered explanations for the association between academic rank and research misbehavior, including the idea that junior researchers are less familiar with responsible research practices , or that, when under pressure to perform, they may compromise their ethics . However, our results indicate that senior researchers observed significantly more research misbehavior. Hence, perhaps junior researchers are more honest in their self-reports, but when asked about the behavior of others, senior researchers are equally critical of their colleagues.
We found no effect of gender; indeed, the use of individual variables such as gender to explain research misbehavior has been criticized. For example, Kaatz, Vogelman & Carnes  pointed out that the overrepresentation of men among those found guilty of misconduct, together with evidence from other areas that men are more likely to commit fraud, is insufficient to conclude that male researchers are more likely to engage in research misconduct. In addition, Dalton & Ortegren  found that the consistent finding that women respond more ethically than men was greatly reduced when controlling for social desirability. The authors note that this does not mean that men and women respond equally ethically, but simply that the differences in ethical behavior may be smaller than initially assumed.
We found the cluster of climate factors to have the greatest share in explaining research misbehavior. This is similar to Crain and colleagues , who found that especially the Integrity Inhibitors subscale (which measures the degree to which integrity-inhibiting factors are present, such as the pressure to obtain funding and suspicion among researchers) was strongly related to engaging in research misbehavior in their sample of US scientists. A high score on the Departmental Norms subscale (the extent to which researchers value norms regarding scholarly integrity in research, such as honesty) was negatively associated with engaging in research misbehavior. When reviewing the individual subscale effects in our study, these two subscale scores are most strongly associated with both perceived frequency and perceived impact. Bearing in mind that we focused on perceptions of engagement in research misbehavior by others in the direct environment, and not on research misbehavior by the respondents themselves, we still think it is reasonable to conclude that we observed a similar pattern. In addition, using a large bibliographic sample based on retracted papers, Fanelli, Costas and Larivière  reported that academic culture affects research integrity, again emphasizing the importance of this cluster.
Broadly speaking, the relationship we observed aligns with the existing literature on unethical behavior in organizations . A meta-analysis by Martin and Cullen  found that unethical behavior (among which they considered lying, cheating and falsifying reports) was associated with a so-called instrumental climate, in which individual behavior is primarily motivated by self-interest . Relatedly, Gorsira et al.  found that employees who perceived their work climate to be more ethical were less likely to engage in corrupt behavior, and vice versa.
Maggio and colleagues  used the previous version of the Publication Pressure Questionnaire and found publication pressure to account for 10% of the variance in self-reported research misbehavior among researchers in health professions’ education. This is similar to our findings, although the authors focused on self-reported misbehavior, whereas we focused on perceptions of engagement in research misbehavior by others in the direct environment. In addition, we used a slightly different set of research misbehaviors and also investigated researchers from other disciplinary fields. Nevertheless, both studies indicate that in an environment where perceived publication pressure is high, researchers are more likely to report research misbehavior than in an environment where perceived publication pressure is low.
Holtfreter and colleagues  used a list of criminological factors that have been associated with research misconduct and asked academic researchers in the US to indicate which factor they thought contributed most to research misconduct. Regardless of their disciplinary field, researchers reported that the stress and strain to perform (including the pressure to publish) was the main cause of research misconduct. Holtfreter and colleagues distinguished only two clusters of factors: ‘bad apples’ (similar to our individual factors) and ‘bad barrels’, comprising both climate and publication factors. That said, their stress and strain items are rather similar to our publication pressure items, supporting the idea of publication pressure as a factor contributing to research misconduct.
Note that we do not claim that individual, climate and publication factors are independent. For instance, we found publication pressure to account for 16% of the variance in perceived frequency when added as the first cluster; however, when climate factors are already in the model, adding publication pressure increases the explained variance by only 2%. This seems intuitive, since publication factors may influence climate factors: increased publication pressure may, for example, lead to authorship disputes that in turn damage the research climate in particular research groups . A related line of reasoning is that publication pressure may arise from how one’s department and its expectations for “productivity” are set up, or at a higher organizational level, to the extent that publication expectations are set or influenced by decision makers above the department level.
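This order-dependence of explained variance can be illustrated with a small simulation. The sketch below uses synthetic data with illustrative variable names and effect sizes (they are assumptions, not estimates from our data): when two predictors are correlated, the variance they share is credited to whichever enters the hierarchical model first.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 5000
# Hypothetical correlated predictors: a climate score and a pressure score.
climate = rng.normal(size=n)
pressure = 0.8 * climate + 0.6 * rng.normal(size=n)
# Hypothetical outcome depending on both, plus noise.
misbehavior = 0.5 * climate + 0.2 * pressure + rng.normal(size=n)

# Entered first, 'pressure' is credited with all variance it shares with 'climate'.
r2_pressure_first = r_squared(pressure[:, None], misbehavior)
# Entered last, it is credited only with its unique contribution.
r2_climate_only = r_squared(climate[:, None], misbehavior)
r2_full = r_squared(np.column_stack([climate, pressure]), misbehavior)
delta_pressure_last = r2_full - r2_climate_only

print(f"pressure entered first: R^2 = {r2_pressure_first:.3f}")
print(f"pressure entered last: delta R^2 = {delta_pressure_last:.3f}")
```

The same variable thus appears to explain a large or a small share of variance depending purely on its position in the entry order, which is why we report both figures in the text.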
Our study’s sample included researchers from different academic disciplines and academic ranks, so the findings bear relevance to a broad group of academic researchers. In addition, relying on previously validated and repeatedly employed instruments, such as the SOURCE  and PPQr , strengthens the validity of our findings.
We should acknowledge a number of weaknesses in our study. Firstly, a response rate of 17% is arguably low, although it is not lower than that of other recent surveys that are considered valid . In addition, a low response rate does not in itself indicate response bias. In another study, we estimated response bias in our sample using a wave analysis and found early responders to be similar to late responders . Moreover, on demographic characteristics such as academic rank, our responders seemed similar to the population , reducing the concern that our sample is biased, at least with respect to those dimensions. In conclusion, with our response rate we cannot exclude the possibility of response bias, but we have some reason to believe it does not influence our results substantially.
Secondly, our outcome variables concern perceived misbehavior by others, whereas many studies into misbehavior, including some of the literature we cited, focus on respondents’ self-reports. Interestingly, whereas self-reported rates of misbehavior have decreased over time, perceptions of the frequency of misbehavior by others have remained more stable . Nevertheless, measurements of perceived misbehavior may be artificially inflated when several responders have witnessed the same incident. In addition, people are generally more earnest when reporting on others’ misbehavior (and more lenient about their own), also known as the Muhammad Ali effect , which could further inflate reported perceptions. Hence, our data may overestimate the actual frequency of research misbehavior. Relatedly, as we measured all outcome and explanatory variables through subjective self-report, the correlations between these variables may be inflated by common-method bias . It seems reasonable to say that perceptions carry credible evidence about the ‘true’ prevalence of research misbehavior and its explanatory variables, although surveying perceptions is by no means conclusive.
Thirdly, an assumption implicit in our work is that when participants reported on the research misbehaviors they observed in their field of study, they were largely reporting on what they observed in their own research setting. Although we do not think this is an unreasonable assumption, we acknowledge that we could not test it explicitly in our survey.
Fourthly, it is a characteristic of multiple regression that explained variance can only increase as explanatory variables are added, so clusters containing more explanatory variables will tend to explain more variance by construction. This should be kept in mind, as our clusters contain different numbers of explanatory variables.
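This mechanical property is easy to demonstrate with a minimal sketch on synthetic data (the sample size and predictor count below are illustrative assumptions, unrelated to our survey): ordinary R² never decreases when a predictor is added, even when every predictor is pure noise, whereas adjusted R² penalizes each extra parameter.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def adjusted_r_squared(r2, n, p):
    """Adjusted R^2 for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(1)
n = 200
y = rng.normal(size=n)            # outcome unrelated to any predictor
noise = rng.normal(size=(n, 10))  # ten pure-noise predictors

# R^2 as each noise predictor is added, one at a time: it can only grow.
r2_path = [r_squared(noise[:, :p], y) for p in range(1, 11)]
adj_path = [adjusted_r_squared(r2, n, p) for p, r2 in enumerate(r2_path, start=1)]

print(f"R^2 with 1 predictor:   {r2_path[0]:.3f}")
print(f"R^2 with 10 predictors: {r2_path[-1]:.3f}")
print(f"adjusted R^2 with 10:   {adj_path[-1]:.3f}")
```

Because R² grows mechanically with the number of predictors while adjusted R² does not, comparing clusters of unequal size on raw explained variance alone can flatter the larger cluster.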
Finally, our results are cross-sectional in nature, so we refrain from drawing any causal conclusions.