
Theory On Publication Bias in Unpublished Literature

Eremetic · Joined Oct 25, 2023 · Posts: 3,780
My goal in this post is to write a short review I can reference in the future, arguing in a fairly simple way that you probably can't reliably assess publication bias in a field by comparing the effect sizes of published and unpublished literature.

When conducting a meta-analysis, researchers sometimes try to correct for the tendency to preferentially publish significant findings by running separate meta-analyses of published and "unpublished" literature and checking whether the two differ significantly. They typically find such unpublished studies by combing through conference abstracts stored in various databases, dissertations, book chapters, non-journal reports, and so on. This is an older method of correcting for publication bias, though it is still relied on with some frequency, and for a while I think it helped researchers avoid realizing that most published research is significantly inaccurate. For instance, De Smidt et al. (1997) analyzed social interventions and found that unpublished research exhibited a mean effect size 83% the size of that found in published research. So some inflation due to publication bias was going on, but not much.
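To make the method concrete, here is a minimal sketch in Python of the published-vs-unpublished comparison. The effect sizes and standard errors are invented for illustration, not taken from any of the studies discussed here:

```python
# Minimal sketch of the published-vs-unpublished comparison described above.
# All effect sizes and standard errors below are made up for illustration.
import numpy as np

def weighted_mean_effect(effects, ses):
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    w = 1.0 / np.asarray(ses) ** 2
    return np.sum(w * np.asarray(effects)) / np.sum(w)

# Hypothetical standardized mean differences (d) and their standard errors
published_d    = [0.45, 0.60, 0.38, 0.52, 0.41]
published_se   = [0.10, 0.12, 0.09, 0.11, 0.10]
unpublished_d  = [0.30, 0.42, 0.25, 0.39]
unpublished_se = [0.12, 0.14, 0.11, 0.13]

pub   = weighted_mean_effect(published_d, published_se)
unpub = weighted_mean_effect(unpublished_d, unpublished_se)

print(f"published mean d:   {pub:.2f}")
print(f"unpublished mean d: {unpub:.2f}")
# The ratio below is the kind of figure De Smidt et al. report (their 83%)
print(f"unpublished / published = {unpub / pub:.0%}")
```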

In Lipsey et al.'s (2003) analysis of behavioral and educational interventions, this figure was 74%.

Today it seems fair to say that these estimates of publication bias in behavioral science are probably misleading. For instance, when OSF (2015) attempted high-powered replications of 100 well-known psychological studies, 63% turned up null results and the mean effect size was half that reported in the original studies. Similarly, Kvarven et al. (2019) find that, on average, meta-analyses produce effects 2.6 times larger than what is found in pre-registered replications.
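To see why a significance filter can inflate things this much, here is a toy simulation (my own sketch, not drawn from Kvarven et al.) in which the true effect is small and only significant results survive:

```python
# Toy simulation of how publishing mainly significant results inflates a
# meta-analytic mean relative to the true effect. Parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_d, n_per_arm, n_studies = 0.2, 30, 5000
se = np.sqrt(2 / n_per_arm)  # approx. standard error of d, two equal arms

# Observed effect sizes for many hypothetical two-arm studies
d_obs = rng.normal(true_d, se, n_studies)
z = d_obs / se
significant = z > stats.norm.ppf(0.975)  # positive and significant (two-sided)

print(f"true effect:                d = {true_d:.2f}")
print(f"mean over all studies:      d = {d_obs.mean():.2f}")
print(f"mean over significant only: d = {d_obs[significant].mean():.2f}")
# At this power the significant-only mean comes out around three times the
# true effect, in the ballpark of the inflation Kvarven et al. report.
```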

Looking at medicine, McAuley et al. (2000) analyzed medical interventions and concluded that published research exhibited effect sizes 15% larger than unpublished studies. They also note that the rate of null or negative findings was 32% in unpublished ("grey") literature and 35.3% in published research.

Similarly, Burdett et al. (2003) analyzed grey literature and concluded that research on cancer was only modestly distorted by publication bias.

But when Begley et al. (2012) reported on attempts to replicate some pre-clinical cancer research, they found a replication rate of only 11%. The replication rate in pharmacology has been estimated at 21% (Prinz et al., 2011).

Moreover, it is well known that the statistical power of medical research is far lower than 68% (Mallet et al., 2017). If studies were reported without selection, the fraction turning up significant could not exceed their average power even if every effect under study were real. So the fact that 68% of unpublished studies still find significant positive effects suggests this method leaves in a great deal of bias.
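To fill in the arithmetic behind this, here is a rough power calculation; the trial size and effect size are illustrative assumptions on my part, not figures from Mallet et al.:

```python
# Sketch of the power argument: with power well under 68%, far fewer than
# 68% of honestly reported studies should come out significant and positive,
# even if every tested effect were real. Parameters are assumptions.
from scipy import stats

def power_two_sample(d, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for effect size d."""
    se = (2 / n_per_arm) ** 0.5
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - d / se)

# A modest true effect tested in a typical small trial:
p = power_two_sample(d=0.3, n_per_arm=50)
print(f"power ≈ {p:.0%}")  # ≈ 32%: under a third significant even if all effects are real

# Observed: 68% of the grey literature was significant and positive, which is
# hard to square with power this low unless results were already selected
# before they ever reached (or failed to reach) a journal.
```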

Unfortunately, I don't know of any work that directly compares grey literature to high-powered, pre-registered direct replications. If we had that, more certain conclusions could be drawn. But given how widespread replication problems in science are now acknowledged to be, it seems highly unlikely that the older research just so happened to keep missing the bias that distorts the majority of science.

We've seen that this method does detect some publication bias, but the degree it detects is substantially smaller than what our most accurate estimates suggest. That, together with the point about statistical power made above and the fact that decades of looking at unpublished literature never triggered the sort of "replication crisis" we saw in the mid-2010s, gives us good reason, in my view, not to treat meta-analyses of unpublished data as a good measure of publication-bias-corrected effect sizes.

Lastly, the fact that we couldn't see the extent of publication bias by looking at what researchers produce outside the journal context suggests that much (though not all) of the problem with science stems from the actions authors take before publishing studies, even in non-journal venues. This limits the degree to which all the blame can be placed on journal editors and reviewers.
 
@OutcompetedByRoomba
 
