Starting with arXiv, the idea of preprints has been increasing in popularity for some time. But now is the time for omniprints. Preprints were a good start, and Crossref has been indexing more and more of them, with preprints far outpacing journal articles (Crossref). There are a number of servers, including arXiv and its derivatives, OSF’s preprint servers, ResearchGate, and more.
I rely exclusively on preprint servers for my publications, mostly out of spite for modern academia and its toxic nature. I absolutely refuse to pay a company so that it can profit off of my work. And honestly, if the goal of publishing is to communicate with other researchers, then traditional publications are not the answer, especially when they’re not open access.
But there’s an issue: a lot of people reject citations of preprints. They want to wait until there’s a “final” version. It’s not even that they scrutinize a preprint more heavily; rather, they use its preprint status as a reason to ignore it outright.
Of course, what matters isn’t whether a journal has decided to pick up an article (Wakefield taught us that); what matters is whether the content of the article is sound. And to determine whether that’s the case, a person has to read the article.
I do think that part of the problem is that the articles are called pre-prints. It’s right in the name: the article hasn’t been printed yet. It hasn’t been completed. That’s why we need to rename preprint servers, which have long since become far more than that, to something else. I’m not sure what name we’ll end up using, but perhaps “omniprint” is the best option, as it implies “all prints”: preprint or postprint, draft print or final print.
Real Open Science
Related to omniprints is the idea of open access, where a journal lets anyone access the publication. I don’t see open access as real open science. It’s certainly a start. After all, if the goal of publishing is to communicate, we need to be able to read what’s being published! But it’s simply not enough. For one thing, publishing in open access journals is often very expensive, literally costing the author thousands of dollars! That’s why we need omniprint.
Actual Peer Review
Of course, in order to take full advantage of omniprint servers, they need to provide a number of tools that allow for open peer review. Comment systems are useful, but they’re not a great way to quickly measure the quality of a paper. I think a tagging system might be useful, where people can anonymously tag a paper. Tags would probably have to include whether or not the paper is scholarly, whether it justifies its position, whether it needs improvement, and so on.
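As a rough illustration of the idea (the post only sketches it, so the tag vocabulary and the aggregation rule here are hypothetical), a server might simply record each anonymous reader’s tags and report, per paper, the fraction of readers who applied each tag:

```python
from collections import Counter

# Hypothetical tag vocabulary -- the post only suggests examples.
TAGS = {"scholarly", "justifies-position", "needs-improvement"}

class PaperTags:
    """Aggregate anonymous reader tags for one paper."""

    def __init__(self):
        self.counts = Counter()
        self.voters = 0

    def tag(self, tags):
        """Record one anonymous reader's set of tags (unknown tags ignored)."""
        self.voters += 1
        for t in set(tags) & TAGS:
            self.counts[t] += 1

    def summary(self):
        """Fraction of readers applying each tag -- a quick quality signal."""
        if self.voters == 0:
            return {}
        return {t: n / self.voters for t, n in self.counts.items()}
```

A reader could then see at a glance that, say, 95% of taggers marked a paper “scholarly” while 40% marked it “needs-improvement”, which is a much faster signal than reading a comment thread.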
And that would be actual peer review. What we think of as peer review is really just one or two reviewers, who might be quite biased, along with an editor. How can we trust two or three people to make a decision about a paper, in an unbiased way? We can’t. That’s why we need omniprint.
Philosophy of Academics includes Philosophy of Education, but also questions about the validity of methods of research and scholarly communication. Questions include, but are not limited to: is the separation between “fields of study” in academia reasonable or arbitrary? Is there a more reasonable way to measure academic achievement beyond degree level? And can people still be world class scholars in more than one field? If I were to categorize these questions, I think they would fit reasonably well into a category that I would call “philosophy of academics.”
This article will be updated over time, as I build a list of questions and concerns that reasonably fit into Philosophy of Academics (PoA) and as I consider further arguments for the validity and utility of the topic.
Questions & Problems
The following is an incomplete list of questions and problems within the topic of PoA.
- Is the separation between “fields of study” in academia reasonable or arbitrary?
- Is there a more reasonable way to measure academic achievement beyond degree level?
- Can people still be world class scholars in more than one field?
- Is consensus a valid measure of the robustness of theories?
- Does bias exist in research funding and publication? If so, what kind, how does it affect results, and how can it be fixed?
All of these are questions about academia specifically, given how academia is reasonably defined: “the environment or community concerned with the pursuit of research, education, and scholarship” (Oxford).
Why is it Philosophy?
It is value theory: these questions ask what value there is in the current practices of academia and whether there are better ones. It is epistemology: it asks whether there are even boundaries between academic fields that can be identified a priori. Finally, it asks “what is possible within academia?”
Not Just Philosophy of Education/Science
Because academia is the environment or community concerned with education, PoA includes philosophy of education. But it is much more than just philosophy of education. Philosophy of education includes many of the questions asked above, especially when it comes to the specific division of ideas as they are taught, but not in terms of actual differences between the domains of knowledge. PoA also includes other questions, such as those related to research methods and availability. Whether peer review is the most reasonable way for researchers to communicate, and how much and what kind of bias exists in research funding and publishing, would all fall under PoA.
Because science is a method of doing research, Philosophy of Academics is also intimately connected to Philosophy of Science, although the questions will differ: PoA focuses more heavily on institutional issues than on the core nature of scientific investigation.
Some of these questions fall into philosophy of science, some into philosophy of education, some into social epistemology, and so on, but they all fall into philosophy of academia, because academia is the “environment or community concerned with the pursuit of research, education, and scholarship.”
While philosophy can exist for philosophy’s sake alone, it can also provide insight into problems and help to offer solutions. Philosophy of academics can not only help us understand academia but also help improve it: improving education specifically, strengthening the relationship between research, education, and scholarship, and serving as a tool to produce better quality data and more open research.
I have been involved in a number of arguments about scientific consensus. The most recent debate convinced me to write about the topic in depth. The idea of scientific consensus has been popular since reports that 97% of all papers offering a position on climate change assert that climate change is happening. I am not going to address the validity of theories on climate change. But it is important to point out a number of issues with relying on consensus among scientists. First, peer-reviewed publishing is dominated by a handful of authors. Consider the following statement from the abstract of “Estimates of the Continuously Publishing Core in the Scientific Workforce.”
Using the entire Scopus database, we estimated that there are 15,153,100 publishing scientists (distinct author identifiers) in the period 1996–2011. However, only 150,608 (<1%) of them have published something in each and every year in this 16-year period (uninterrupted, continuous presence [UCP] in the literature). This small core of scientists with UCP are far more cited than others, and they account for 41.7% of all papers in the same period and 87.1% of all papers with >1000 citations in the same period.
Basically, the work of roughly 1% of all publishing scientists accounts for 41.7% of papers published between 1996 and 2011. From this information alone, we know that any analysis of published articles will be skewed heavily toward the biases of that 1% of the publishing community.
Aside from the “Academic 1%,” there are a number of other biases expressed within the academic community. According to “Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data,” the “publish or perish” phenomenon, where academics must publish in order to keep their jobs, seems to result in a bias toward “positive results.” Studies that are inconclusive or inconsistent with the theory being addressed are thrown aside, and the focus falls on papers with “positive results.” This is doubly problematic, as the goal in science is really to falsify a theory rather than to support it.
Neil deGrasse Tyson, in a tweet, stated that “anyone who thinks scientists like agreeing with one another has never attended a scientific conference.” This is fairly accurate. However, in at least some cases scientists also do not like expressing views which are very far from consensus. One of the most iconic examples of this situation was the feud between Newton and Hooke. While most people who have taken a science class are quite familiar with Sir Isaac Newton, fewer people are aware of the once prominent Robert Hooke, who, as the Royal Society’s Curator of Experiments, was one of its dominant figures before Newton’s rise. Hooke viewed light as a wave. Newton viewed it as a particle. While light is now viewed as both wave and particle, this debate was problematic at the time. Newton actually waited until after Hooke died before publishing some of his work on the topic. That is how strong the fear of “retribution” for bucking the trend was.
Now, all of this together is not a falsification of the ability to measure robustness of a theory using scientific consensus. But it certainly is enough to question why people have so much faith in consensus. If I wanted to show that consensus was not a valid measure, I would need to actually provide statistically significant data. If someone else wanted to show that it was a valid measure, they would need to show evidence as well.
There is also a philosophical argument against the validity of consensus, even among experts. It has to do with the reason why appeal to authority is reasonable. Appeal to authority is often seen as a fallacy. But it is only a fallacy when the person is not reasonably considered an authority on a topic. For instance, if you argued that the Earth was flat because your parents told you that it was, that would be an appeal to false authority. If you argued that the Earth was round because a NASA astronaut who had been to space said it was round, that would be a reasonable appeal to authority.
But what makes appeal to authority valid at all? It has to do with expertise, or at least the assumption of expertise. When appeal to authority is used in a valid sense, a person is assumed to have full knowledge of the topic and to honestly admit any gaps in the available information on the topic. Because of this assumed completeness of knowledge, an appeal to two authorities would add no information beyond an appeal to one.
Even without this assumption, there are problems with relying on consensus. In response to my discussion, Jeremiah Traeger asked:
Under a Bayesian prediction, if nine out of ten dentists tell you that you have a cavity, are you more or less likely to have a cavity? If nine out of ten doctors tell you that you have cancer, do you seek treatment? If a survey shows that 97 out of 100 actively publishing climate scientists state that global warming is occurring, what do you take from it? – A Tippling Philosopher
These questions are all interesting, but there is no single answer. The interesting point is that the author did not seem to care. The question alone seemed to act as some kind of justification in his mind. But to use Bayesian inference, we need to make a number of assumptions. We need to know something about how knowledge is distributed between individuals. Does each individual have knowledge that the others do not? How much? Even if there is a difference, it may be so small that after a few experts are put together, there is almost no additional information. So just referencing Bayesian inference, as if it somehow provides justification, is a non-starter. We need more information.
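To make the point concrete, here is a minimal sketch of a Bayesian update on expert opinion. The numbers are made up for illustration (a 10% prior, experts who are right 90% of the time); the only point is how much the independence assumption matters:

```python
def posterior(prior, n_yes, n_no, sens=0.9, fpr=0.1, independent=True):
    """Posterior P(condition | expert votes) under a toy Bayes model.

    Each expert votes yes with probability `sens` if the condition holds
    and `fpr` if it does not. If the experts are fully correlated (all
    relying on the same source), their votes carry the information of one.
    """
    if not independent:
        # Fully correlated experts: count the majority vote exactly once.
        n_yes, n_no = (1, 0) if n_yes >= n_no else (0, 1)
    # Likelihood ratio of the observed votes, then Bayes in odds form.
    lr = (sens / fpr) ** n_yes * ((1 - sens) / (1 - fpr)) ** n_no
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

# Nine of ten dentists say "cavity", starting from a 10% prior:
p_independent = posterior(0.1, 9, 1)                     # near certainty
p_correlated = posterior(0.1, 9, 1, independent=False)   # exactly 0.5
```

Under the independence assumption, nine of ten agreeing experts push the posterior to near certainty; if they all learned their answer from the same source, ten votes are worth one, and the same tally leaves the question a coin flip. Which regime real expert communities are in is precisely the empirical question that citing “Bayesian prediction” skips over.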
I find it disturbing how many people take scientific consensus at face value. The idea that we can measure the robustness of a theory based on how many scientists support it is interesting, but it has not been tested. And the position is itself a falsifiable statement and therefore, like all potential theories, should be tested before any claim on the topic is made. Until then, we have only one option: look at the data and the theory. See how the theory matches up with actual observations. This can be done by reviewing meta-analyses.
When I first wrote this piece, I did not include an example of consensus without evidence. I have been researching the efficacy of B. pertussis vaccines for some time. While there is a great deal of consensus on the efficacy of the vaccines, the scientific data is not actually in line with this consensus. Studies that show efficacy conflate efficacy at preventing disease with efficacy at preventing infection. Multiple studies have found evidence against the view that the B. pertussis vaccines actually help prevent the spread of infection. Yet these studies are largely ignored by the medical community, and there is no attempt to confirm them. Indeed, it seems like pertussis could be nearly an epidemic but is going undetected because the majority of infections are asymptomatic. To see why, here is my analysis of a Chinese study; a similar study could be conducted in the United States.
This discussion has now been going back and forth for a while and I find it interesting to see the responses I get. One of the most interesting is seeing a classic shifting of burden of proof, from someone who is supposedly well versed in philosophical discussions. The following is from a rebuttal against what I have said.
Tu quoque, my friend
One of the biggest criticisms against SA is that his own criticisms can be levelled directly back at him. He states things like:
“I’m still waiting for empirical data consistent with the assertion that the percentage of scientists that agree with a theory is a valid measure of the robustness of a theory.”
The thing is, I can reverse this:
I’m still waiting for empirical data consistent with the assertion that the percentage of scientists that agree with a theory is NOT a valid measure of the robustness of a theory.
He keeps ramming home this notion that we need an empirically evidenced piece of research to show that consensus is a good indicator of “robustness”, but fails to see that for the negation of this, he also needs to offer evidence. Because what is really happening here is pro-consensus is asserting something, and “anti”-consensus is asserting in rebuttal.
This falsification charge can also be levelled at his claim, too. You cannot confirm either claim, only falsify them. Which one gets falsified?
Now, Tippling’s statement would be reasonable if I had actually said that scientific consensus was not a valid measure of the robustness of a theory. However, I never made such a claim. I responded to the claim that it was a reasonable measure, and demanded satisfaction of the burden of proof. Tippling’s statement is therefore just an attempt to make me defend my dismissal of an unevidenced claim.
Since initially writing this article, I have come across many other demands to rely on scientific consensus, and I have also come across another interesting problem: academia has largely become a system of doctrine. The cult-like nature of academia can be seen in the answers to a Stack Exchange question and the response to my answer. The question posed was whether or not to publish a result which contradicts an existing mathematical result that has already been published. My answer is “yes.” The order in which academia received the results does not change the validity of either result. Only the results matter. If no error is obvious after review, then both results should be considered equally sound. To say otherwise (1) introduces doctrine and (2) suggests that the probability of a result being correct is somehow determined by the order in which it was received. While such an idea is absurd, it seems to be prominent, and it sways consensus: regardless of its validity, any result which is inconsistent with consensus is placed under more scrutiny, and therefore requires more evidence, than it would if it had been published first and become the accepted “truth.”