Tag Archives: journals

The elite colleges, academic writing, and the Journal of Universal Rejection.

What makes something elite? For elite colleges and academic journals, a large part is selectivity: the lower the fraction of people who can go to your college, publish in your journal, or earn your credential, the more selective, and thus the more elite. Harvard boasts that “the best” apply, and of these only 3% get in. Thus Harvard selects for the top 1%, or so they claim. These students are not selected as the brightest, or the most moral or motivated, but by some combination: they are the most Harvardian.

The top 20 most selective US colleges, 2022-23 according to Nathan Yau, FlowingData.com

Selectivity is viewed as good. That only this 1% can get into Harvard makes the students elite and makes Harvard desirable. Some lower-tier Ivy colleges (Columbia, for example) have been caught cheating to feign higher selectivity; they’ve exaggerated the number of people who apply so they can inflate their rejection rate, justify a high tuition, and presumably justify a high salary for their graduates. And it’s self-sustaining. Generally speaking, college professors and high-powered executives are drawn from elite institutions. Elite grads pick other elite grads, viewing that as the way to get the best material with the best education.

By this measure of selectivity, The Journal of Universal Rejection is the most elite and the best. It’s the journal you should definitely aim for. They reject every article submitted on every subject. They are thus more elite than Harvard or Caltech, and more select than the club of US presidents, Olympic gold medalists, or living chess champions, and they got there by just saying no. Many people send in their articles, by the way; all are rejected.

My lesson from this is that selectivity is a poor metric for quality. Just because an institution or journal is selective in one aspect does not mean that it will be selective in another. Top swimmers and footballers rarely go to Harvard, so Harvard has to pick from a lower tier of applicants for its swimming and football teams. It’s the same with the top students in math or science: they apply to Caltech, with the rejects going to Stanford or Princeton. As for top chess players or US Navy SEALs, a Harvard degree does nothing for them; few SEALs go to Harvard, and few Harvard students could be SEALs. Each elite exists in its own bubble, and each bubble has its own rules. Thus, if you want to be hired as a professor, you have to go to the appropriate institution, though not necessarily the most selective one.

From Nature, 2024: 20% of all academics come from just 8 schools; 40% come from the top 21.

As for journals to read or write in, an elevated reader like you should publish where you can be read and understood, and perhaps change things for the better, I think. Some money would be nice too, but few scientific journals offer that. Based on this, I have a hard time recommending scientific journals or conferences. More and more, they charge the writer to publish or present, and offer minimal exposure for your ideas. They charge readers and attendees such high fees that very few will see your work; university libraries subscribe, but often on condition that not everyone can read for free. Journals often change your writing too, sometimes for the better, but often to match the journal’s outlook or style, or just to suggest (demand) that you cite some connected editor. JofUR is better in that way: no charge to the author, and no editorial changes.

Typically, journals limit your ability to read or share your own work, assuming they accept it at all; then they expect you to review for them, for free. So why do academics write for these journals? They’re considered the only legitimate way to get your findings out; worse, that’s how universities evaluate your work. University administrators are chosen with no way to judge your research quality and with a mandate for number-based evaluation, so they evaluate professors by counting publications, particularly in elite (selective) journals, and by the elite (selective) school you come from. It’s an insane metric that results in awful research and writing, and bad professors too. I’ve come to think that anyone, outside of academia, who writes in a scientific journal is a blockhead. If you have something worthwhile to say, write a blog, or maybe a book, or find a free, open-access journal. In my field, hydrogen, the only free, open-access journals are published in Russia and Iran.

And just for laughs, if you don’t mind the futility of universal rejection, there’s JofUR. Mail your article with a self-addressed return envelope, or email it to j.universal.rejection@gmail.com. You’ll get a rejection notice, and you’ll join an un-elite group: rejected, self-effacing academics with time on their hands.

Robert Buxbaum, January 16, 2025. If, for some reason, you want to get your progeny into an elite college, my niece, a Harvard grad, has a company that does just that: International College Counselors. They help with essays, testing, and references, and nudge your progeny to submit on time.

Genetically modified food not found to cause cancer.

It’s always nice when a study is retracted, especially so if the study alerted the world to a danger that turns out not to exist. Retractions don’t happen often enough, I think, given that false positives should occur in at least 5% of all biological studies. Biological studies typically use 95% confidence limits, a confidence limit that guarantees false positives 5% of the time even for the best-run versions (or 10% if both 5% tails are taken to be significant). These false positives will appear in 5-10% of all papers as an expected result of statistics, no matter how carefully the study is done, or how many rats are used. Still, one hopes that researchers will check for confirmation from other researchers and from other groups within the study. Neither check was done in a well-publicized recent paper claiming that genetically modified foods cause cancer. Worse yet, the experimental design was such that false positives were almost guaranteed.
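To see how unavoidable this is, here’s a minimal simulation sketch (my own illustration, with made-up group sizes and measurements, not data from any of the studies discussed below): both groups are drawn from exactly the same distribution, yet a two-sided test at the 95% confidence level still flags a “significant” difference about 5% of the time.

```python
# Minimal sketch with illustrative numbers: simulate perfectly run studies in
# which the treatment truly does nothing, and count how often a two-sided
# t-test at the 95% confidence level still declares a "significant" effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 20_000
n_per_group = 10                     # animals per group (hypothetical)

false_positives = 0
for _ in range(n_studies):
    control = rng.normal(loc=1.0, scale=0.2, size=n_per_group)  # same distribution
    treated = rng.normal(loc=1.0, scale=0.2, size=n_per_group)  # for both groups
    result = stats.ttest_ind(control, treated)
    if result.pvalue < 0.05:         # "significant" at the 95% confidence level
        false_positives += 1

print(false_positives / n_studies)   # ≈ 0.05, no matter how careful the study
```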

Séralini published this book, “We are all Guinea Pigs,” simultaneously with the paper.

As reported in Nature, the journal Food and Chemical Toxicology retracted a 2012 paper by Gilles-Eric Séralini claiming that eating genetically modified (GM) maize causes cancerous tumors in rats, despite “no evidence of fraud or intentional misrepresentation.” I would not exactly say no evidence. For one, the choice of rats and the length of the study were such that 30% of the rats would be expected to get cancer and die even under the best of circumstances. Also, Séralini failed to mention that earlier studies had come to the opposite conclusion about GM foods. Even the same journal had published a review of 12 long-term studies, between 90 days and two years, that showed no harm from GM corn or other GM crops. Those reports didn’t get much press because it is hard to get excited about good news; still, you’d have hoped the journal editors would demand that their own review, at least, be referenced in a paper claiming the contrary.

A wonderful book on understanding the correct and incorrect uses of statistics.

The main problem I found is that the study was organized to virtually guarantee false positives. Séralini took 200 rats and divided them into 20 groups of 10. Taking two groups of ten (one male, one female) as controls, he fed the other 18 groups of ten various doses of genetically modified grain, either alone or mixed with Roundup, a herbicide often used with GM crops. Based on pure statistics, at 95% confidence, you should expect that, out of the 18 groups fed GM grain, there is a 1 − 0.95^18 chance (60%) that at least one group will show a cancer increase, and a similar 60% chance that at least one group will show a cancer decrease at the 95% confidence level. Séralini’s study found both these results: one group, the female rats fed 10% GM grain and no Roundup, showed a cancer increase; another group, the female rats fed 33% GM grain and no Roundup, showed a cancer decrease, both at the 95% confidence level. Séralini then dismissed the observation of cancer decrease, and published the inflammatory article and a companion book (“We are all Guinea Pigs,” pictured above) proclaiming that GM grain causes cancer. Better editors would have forced Séralini to acknowledge the observation of cancer decrease, or demanded that he analyze the data by linear regression. If he had, Séralini would have found no net cancer effect. Instead he got to publish his bad statistics and (since none of the counter-studies were mentioned) unleashed a firestorm, with GM grain products pulled from store shelves.
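For anyone who wants to check the arithmetic, here’s a quick sketch of the calculation above (my own illustration, using the 18 treated groups and a 5% per-group, per-direction false-positive rate):

```python
# Quick check of the arithmetic above: with 18 treated groups and a 5% chance
# per group of a spurious "significant" result in a given direction, the chance
# that at least one group shows a spurious cancer increase is 1 - 0.95**18.
p_per_group = 0.05      # false-positive rate per group, per direction
n_groups = 18           # treated groups in the study

p_at_least_one = 1 - (1 - p_per_group) ** n_groups
print(round(p_at_least_one, 2))   # 0.6 -- and the same again for a spurious decrease
```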

Did Séralini knowingly design a research method aimed to produce false positives? In a sense, I’d hope so; the alternative is pure ignorance. Séralini is a long-time anti-GM activist. He claims he used few rats per group because he was not expecting to find any cancer; no previous test on GM foods had suggested a cancer risk. But this is misdirection: no matter how many rats are in each group, if you use 20 groups this way, there is a 60% chance you’ll find at least one group with a cancer increase at the 95% confidence limit. (This is Poisson-type statistics; see here.) My suspicion is that Séralini knowingly gamed the experiments in an effort to save the world from something he was sure was bad; that he was a do-gooder twisting science for the greater good.
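For what it’s worth, the Poisson approximation gives nearly the same answer as the exact calculation; a short sketch, again with my own numbers:

```python
# Poisson approximation of the same result: the expected number of spurious
# "significant" groups is lambda = 18 * 0.05 = 0.9, so the chance of seeing at
# least one is 1 - exp(-lambda).
import math

lam = 18 * 0.05
print(round(1 - math.exp(-lam), 2))   # 0.59, close to the exact ~0.60
```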

The most common reason for retraction is that the article has appeared elsewhere, either as a substantial repeat from the authors, or from other authors by plagiarism or coincidence. (BC Comics, by Johnny Hart, 11/25/10).

It’s important to cite previous work and aspects of the current work that may undermine the story you’d like to tell; BC Comics, Johnny Hart.

This was not the only major retraction of the month, by the way. The Harrisburg Patriot & Union retracted its 1863 review of Lincoln’s Gettysburg Address, a speech the editors originally panned as “silly remarks”, deserving “a veil of oblivion….” In a sense, it’s nice that they reconsidered, and “…have come to a different conclusion…” My guess is that the editors were originally motivated by do-gooder instinct; they hoped to shorten the war by panning the speech.

There is an entire blog devoted to retractions, by the way: http://retractionwatch.com. A good friend, Richard Fezza, alerted me to it. I went to high school with him, then through undergrad at Cooper Union, and to grad school at Princeton, where we both earned PhDs. We’ll probably end up in the same old-age home. Cooper Union tried to foster a skeptical attitude against group-think.

Robert Buxbaum, Dec 23, 2013. Here is a short essay on the correct way to do science, and how to organize experiments (randomly) to make biased analysis less likely. I’ve also written on nearly normal statistics, and near-Poisson statistics. Plus, on other random stuff in the science and art world: Time travel, anti-matter, the size of the universe, Surrealism, Architecture, Music.