
THE DARK SIDE OF SCIENCE

Updated: Feb 15, 2021

Do not get me wrong: science is our best tool, we owe our recent progress to research, and most of our practice should be guided by constant reading. Every field has benefited from new scientific findings. Millions of lives have been saved, technology is rapidly reaching new standards, and we can now provide far better services to the population. Yet there is a dark side to it, and the scientific method as we know it has problems of its own.


Bias


A famous paper surprised the scientific world in 2005 by claiming that most published research findings are actually false. And its author proved it, with science. He points to bias as one of the main distorters of truth: selective or distorted reporting, poorly designed studies and manipulated statistics are all described as means of influencing results (Ioannidis, 2005). All this because we are unable to detach from our own dogmas. Not to mention money.


Let’s be honest here: we have all been guilty of protecting our own beliefs against reason at some point, and we probably still are. If you search for long enough, you are likely to find some paper that supports your convictions. Frankly, the articles I have posted on this website probably reflect this to some extent. When we start a literature review, we inevitably tend to search for the significant results that interest us.


Design, replicability and statistics


One of the main problems lies in poor study design. I have lost count of the studies I started reading only to find major flaws in their methodological design, flaws that directly undermine the validity of their outcomes. Still, these studies continue to be published every day and are probably encouraging practitioners to follow their practical applications.


Consequently, one of the main principles of science is disregarded. Replicability is one of the central requirements of a well-designed study, and it allows researchers to continually build up data that reinforce or contradict earlier findings. Further investigation then enables confirmatory designs, such as systematic reviews and meta-analyses, which Ioannidis (2005) identified as far more reliable and less susceptible to bias. However, journals have adopted a prevailing mentality of new and isolated discoveries, discouraging replication and providing professionals with overwhelming amounts of research of limited relevance, as discussed by Hubbard & Lindsay (2013).

Another important weakness of science is how dependent it is on statistical significance (p-values), and how much statistical significance depends on the goodwill of researchers. Science has become so reliant on this method that a recent study revealed that as many as 96% of the papers reporting a p-value declared statistically significant findings (Chavalarias, Wallach, Li, & Ioannidis, 2016).
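To see how easily "significant" results arise, here is a minimal simulation sketch, assuming Python with numpy and scipy and purely illustrative numbers: when enough comparisons are tested at the conventional 0.05 threshold, some will pass by chance alone, even when there is no real effect anywhere.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_tests = 20
false_positives = 0
for _ in range(n_tests):
    # Both groups come from the SAME distribution: any "effect" is pure noise.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons came out 'significant'")
# At alpha = 0.05, the chance of at least one false positive across
# 20 independent tests is 1 - 0.95**20, roughly 64%.
```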


Additionally, it is well known that researchers are required to publish consistently, and since statistical significance has been vital for journals to accept their work, authors are pushed either to design studies that are more likely to produce significant results or to manipulate their analyses until significance appears, a practice known as p-hacking. One proposed alternative to p-values is the effect size, which expresses the magnitude of the difference instead of determining significance on the basis of probability alone (Sullivan & Feinn, 2012).
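To illustrate the effect-size argument, here is another sketch (again assuming Python with numpy and scipy; the group means and sample sizes are invented for illustration): with a large enough sample, even a trivially small difference produces a very small p-value, while Cohen's d makes clear how negligible the effect really is.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A tiny true difference (0.05 standard deviations) but a huge sample.
a = rng.normal(loc=0.00, scale=1.0, size=20_000)
b = rng.normal(loc=0.05, scale=1.0, size=20_000)

_, p = stats.ttest_ind(a, b)

# Cohen's d: difference in means scaled by the pooled standard deviation.
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

print(f"p-value   = {p:.2g}  (almost certainly 'significant')")
print(f"Cohen's d = {d:.3f}  (a negligible effect in practice)")
```

Reporting d alongside p would flag such a result as statistically detectable but practically irrelevant, which is exactly the distinction a p-value alone hides.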


Money


Ioannidis (2005) also found that the greater the financial interest in a field, the less likely its results are to be trustworthy. This is no surprise, since science requires investment and investors require results, a clear conflict of interest. It is why privately funded studies tend to lean towards outcomes that favour their sponsors, and it also suggests that low-profit research goes underfunded, however relevant it may be.


Then there is the paradox of scientific journals. Typically, articles are written by researchers and submitted to journals either for free or at a cost covered by the authors themselves. Peer review, which incidentally has yet to prove its efficacy (Jefferson, Alderson, Wager, & Davidoff, 2002), is done by academics, also for free. The journals then charge expensive subscriptions for access to the articles, complicating access to recent research and thereby slowing the development of new studies (Smith, 2006).


Medicine and sports science


The first problem in this field is the combination of excessive flexibility in study designs and small, heterogeneous samples, which, as Ioannidis (2005) also identified, lowers the probability of achieving reliable results.
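A quick power simulation shows the sample-size half of this problem (a sketch assuming Python with numpy and scipy; the effect size and group sizes are hypothetical): with small groups, even a genuine moderate effect is detected only a fraction of the time, so individual studies will routinely contradict one another.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(n_per_group, effect_sd=0.5, trials=2_000):
    """Fraction of simulated studies that detect a true effect at p < 0.05."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n_per_group)        # control group
        b = rng.normal(effect_sd, 1.0, n_per_group)  # genuinely better group
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / trials

for n in (10, 30, 100):
    print(f"n = {n:>3} per group -> power ~ {power(n):.2f}")
```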


However, the main flaw in sports and rehabilitation science is that we have yet to find a meaningful way to produce findings that take into consideration the complexity of the human being as a multifactorial individual. Researchers are ultimately forced to ignore characteristics such as individual values and past experiences, and this can lead to dangerous generalisation of findings (Mykhalovskiy & Weir, 2004; Yu, Du, Yi, Wang, & Guo, 2019).


Human behaviour and the social sciences are our best bet for addressing this setback in the future. However, funding for these fields remains considerably low, and very complex study designs would be needed to obtain meaningful outcomes. Until then, as professionals in this field, we should acknowledge this limitation and apply current science with caution.


Conclusion


If scientific knowledge were complete and correctly applied, everything would work more efficiently, but this is not the case. As we have seen, there are some major issues with current research. Even so, I will always stand my ground as an evidence-based practitioner: we must never forget that science is still our most reliable way to acquire knowledge and provide solutions to new problems. Just remember that critical reasoning should be an essential skill for each and every one of us, even when reading research.


References


· Chavalarias, D., Wallach, J. D., Li, A. H. T., & Ioannidis, J. P. A. (2016). Evolution of reporting P values in the biomedical literature, 1990–2015. Journal of the American Medical Association, 315(11), 1141–1148.


· Hubbard, R., & Lindsay, R. M. (2013). The significant difference paradigm promotes bad science. Journal of Business Research, 66, 1393–1397.


· Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), 0696–0701.


· Jefferson, T., Alderson, P., Wager, E., & Davidoff, F. (2002). Effects of editorial peer review: A systematic review. Journal of the American Medical Association, 287(21), 2784–2786.


· Mykhalovskiy, E., & Weir, L. (2004). The problem of evidence-based medicine: directions for social science. Social Science & Medicine, 59, 1059–1069.


· Smith, R. (2006). The trouble with medical journals. Journal of the Royal Society of Medicine, 99, 115–119.


· Sullivan, G. M., & Feinn, R. (2012). Using effect size—or why the P value is not enough. Journal of Graduate Medical Education, 4(3), 279–282.


· Yu, Z., Du, H., Yi, F., Wang, Z., & Guo, B. (2019). Ten scientific problems in human behavior understanding. CCF Transactions on Pervasive Computing and Interaction, 1(1), 3–9.

