Discussion
A medical journal says the case reports it has published for 25 years are, in fact, fiction
october8140: I think research should be assumed fiction until it’s peer reviewed.
moi2388: Independently replicated. "Reviewed" says pretty much nothing.
contubernio: There is no good evidence that peer review improves quality, and there is perhaps some to the contrary (many predatory journals are peer reviewed). arXiv (unreviewed) is among the most reliable sources available.
sourcegrift: In the era of GitHub etc, if you're not giving out every single data point of your research, it should be assumed it's fake.
newzino: The detail that makes this more than a labeling error: the fictional nature appeared in the journal's author guidelines, not in the published articles. Researchers who cited these 61 papers had no way to distinguish them from genuine case reports. 218 citations later, the fiction is embedded in secondary analyses and literature reviews written by people who had no idea.

The "Baby Boy Blue" (2010) case is the clearest example of the harm. An infant allegedly exposed to opioids through breast milk. That case influenced clinical guidance on codeine safety in nursing for years.

The CARE guidelines (Consensus-based Clinical Case Reporting Guidelines) exist specifically to create transparency in case reporting. They're voluntary, which is how a journal can run a 25-year undisclosed fiction program and technically say the authors knew.
SiempreViernes: Doesn't sound like these works were "full" articles, but rather something more like short review articles.
fsh: The article is about case reports, not about empirical studies. Putting a fake case report on GitHub wouldn't make it any less fake.
observationist: Yeah, it's almost like science is better when the scientific method is applied to everything, instead of delegating validation to some third party based on credentials or authority or social status.
qwertox: > Putting a fake case report on GitHub wouldn't make it any less fake.

Much easier to review for whoever wants to review it.
kittikitti: https://onlinelibrary.wiley.com/doi/10.1111/jpc.14206

Maybe we should revisit the routine practice of infant male genital mutilation?
krisoft: What a mess.

> One author of a case report was surprised to learn of the correction — because the case described in her article is true.

So they managed to mess up even the correction of their giant mess.

> correcting the correction "would be difficult."

I bet. That's why they should have got it right in the first place. I would be absolutely ballistic if they libelled my work like that.
SiempreViernes: Yeah, they seem to have been quite sloppy with these vignettes.

Though note that in the situation of the mislabeled real case, the formal solution could be a retraction of the entire highlight article, since it is against the (poorly implemented) policy to include a real case study.

I don't know how patient consent for being used in a case study works: did this author get a perpetual license, did they copy something from another article they wrote, or from an article someone else wrote?
snapetom: Original HN discussion about the case: https://news.ycombinator.com/item?id=46789205
insane_dreamer: I don't mind the fact that the case reports were fictional -- actual cases can be problematic in terms of privacy as it may be easy to ascertain the patient's identity from the details -- but not putting a notice that it was fictional (or altered from a real case for privacy), for teaching purposes, is pretty bad.
Towaway69: And then there will be large amounts of fake data for the next generation of AIs to learn from.

What is stopping anyone from faking the data they use in their research papers? Sure, it might be verifiable, but not if the data was made to give the desired results, i.e. faked to be exactly what the paper requires.
damnesian: Too late, it's already in the bloodstream, LLMs will be recommending things to pediatric doctors and families from fabricated archives for years, probably.
TomMasz: This is fine, though somewhat belated. But it does nothing to deal with the public's growing distrust of science in general, and medical science in particular.
sekuraai: They had access to ChatGPT for last 25 years!
smelendez: You can see the full article here: https://www.cpsp.cps.ca/uploads/publications/pxy155-Teething...

It looks like it has a short intro paragraph that talks about a specific case with no identifying details (beyond "a previously healthy 4-month-old boy"), citing this report by other doctors: https://pubmed.ncbi.nlm.nih.gov/27503268/ — followed by further discussion of physician reports and survey data.

The correction is explicitly listed as applying to that article (https://academic.oup.com/pch/article-abstract/24/2/132/51642...), which itself seems false, since that article doesn't seem to include a fictional vignette.
andrewflnr: It looks like they labelled all of them fiction based on a single instance of one of the authors fabricating their case, a gross overcorrection. I wonder if they flinched at the prospect of actually assessing the validity of all of them and decided it was safer to just disclaim them.
petesergeant: > It looks like they labelled all of them fiction based on a single instance of one of the authors fabricating their case

Does it? That's directly at odds with what the article and editor say.
programmertote: Speaking as the spouse of a medical doctor: case reports are sometimes a good way to increase the bullet point count in your CV if you are a medical resident. A lot of residents do them just for the sake of beefing up their CVs (to apply for fellowship, for example).
snapetom: In vet med, case studies are still pretty important, but that's because vet med is in its infancy compared to human medicine. At least one case study, usually two, are required to be eligible to take boards. Future board renewals, I think for most boards, are "published one original piece of research or two case studies" among a slew of other requirements.
jfengel: The "growing distrust" is due to a concerted disinformation campaign which is independent of the facts.There was indeed much negative information that the public was not aware of, and they should perhaps have held more skepticism than they did. But the gleeful acceptance of outright anti-science lies implies that they were never really in a position to make a sound judgment one way or the other.In those circumstances I'll settle for people reaching the correct action: that practically all accepted medicine is correct and they should follow their doctor's advice. If they choose to over-inflate the importance of things that do indeed go wrong, then they are the ones failing to reach valid conclusions.
tw85: No, it isn't. Anthony Fauci and Rochelle Walensky were both on record, on television, claiming that anyone who takes the covid vaccine will not contract the virus (sterilizing immunity). The medical community and public health in particular disgraced themselves by going all in on demonizing anyone who raised questions about the covid jabs, mocking Ivermectin as "horse paste", claiming cloth masks were very effective against respiratory viruses (they are not, and this has been known for decades), and even that the concept of acquired immunity from recovering from an infection doesn't exist. These are all trivially verifiable things that happened during covid madness, and instead of walking back some of their false claims they simply doubled down on blaming the anti-vaxxers (even after jab uptake exceeded 80%).
learingsci: “Pics or it didn’t happen,” goes a long way in my book.
MarkusQ: You may want to update that, given recent advances in generative AI.No idea what you should update to, mind you, but the old era of photographic evidence is on its last jpgs.
Bratmon: Serious question: Why do doctors change their practice so much based on one case study? Surely, even if there isn't any malice, a doctor can make a mistake?
andrewflnr: > The corrections come following a January article in New Yorker magazine that mentioned one of the reports — "Baby boy blue," ... was made up.

> "Based on the New Yorker article, we made the decision to add a correction notice to all 138 publications..."

Emphasis mine.
sparky_z: Sure, if you emphasize selectively you can make it sound like it says that. Here are some other quotes from the article that clearly refute your interpretation:

> The journal decided when it first started publishing the article type "that the cases should be fictional to protect patient confidentiality,"

> While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.

> "The editor acknowledged that the editorial team is at fault for overlooking the fact that our case was real during the review process,"

It's pretty clear that the journal always thought of these as fictional vignettes, and either didn't realize or didn't care that they had not made that sufficiently clear to readers. The New Yorker article clued them into the fact that it was a problem, so they added the correction to all of their case studies to clarify that they were intended to be fictional. In (at least) one case, the author also didn't realize they should be fictional, and submitted a real case study, which has now been incorrectly "corrected."
myiosaccount765: I wondered this too after reading the original New Yorker article a few weeks back and was quite surprised.

However, the article also made me think that once a practice is adopted it's hard for it to change, even if the evidence supports changing. (Which is how I expected it to be from the outside.)

I figured there was some context I was missing as to why some things are quicker to adopt and others less so. Maybe adopting this change was seen as "saving" lives by being more cautious about how medicines and feeding interact, while reverting the change is "risky" in case there is truth to it.
ambicapter: Tough to replicate an isolated case study?
moi2388: Especially if they are made up.
cfu28: Case studies are used in medical decision making only when there is no better form of evidence available, or there is a gap in current evidence. It is not the first place to look
crummy: > While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.Sounds like they were asking authors for fiction, so probably plenty of them are.
krisoft: They asked the authors for fiction “at times”. Meaning that some are fiction, and some very well might not be. The best they can do is try to contact the authors and see if the case report they wrote is fictional or not. The second best is to admit that they made a mess and say “the case reports might or might not be fictional, we have no way of knowing”.
sparky_z: I suspect you're reading too much into that phrase. It seems more likely to me that the reporter here contacted one or more of the case report authors directly to ask for a copy of the instructions they received from the journal at the time. (This would be good journalistic practice, rather than just taking the journal's word for it when they might have an incentive to lie.) But they obviously couldn't explicitly confirm that every single author received similar instructions, so they used the "at times" phrase to cover their ass.

If they had direct evidence that some author's instructions failed to ask for the case study to be fictionalized, I think they would have specifically said that. It's more definitive, and catches the journal in a lie.

I'm pretty sure what happened here is that:

1) The journal always asked for, and thought they received, fictionalized case studies.

2) It never occurred to them that they were presenting the case studies in a way that could be misinterpreted. (This is indefensible negligence, but I also understand how it could have happened "innocently".)

3) Once the issue came to light, they issued blanket corrections to every case study describing them as fiction, because they asked for fiction and edited them all as fiction. (I.e., they didn't do any fact checking or independent confirmation, beyond broad medical strokes.)

4) At least one author didn't read the instructions carefully enough and sent in a real case study, which, as the article says, wasn't caught by the editors during the review process. (And really, how would they catch it? If they thought they asked for fiction, they wouldn't be fact checking it.)

I actually think the disclaimer may be appropriate, even on the article that was written as a true story, if it wasn't reviewed as one.
krisoft: > If they had direct evidence that some author's instructions failed to ask for the case study to be fictionalized, I think they would have specifically said that.

Which they do. They specifically say that. "Neither the instructions for authors from 2010 — when Koren and his coauthor Michael Rieder would have written their article — nor the linked list of article types — state the cases are fictionalized, or fictional."

"An archived version from September stated, 'Each highlight is a teaching tool that presents a short clinical example, from one of the studies or one-time surveys,' with no mention of fiction."

These are direct quotes from the article. The exact kind you are asking for. With inline links to the archived documents. And yes, it is very definitive.

> I'm pretty sure what happened here is that:

No need to speculate. Just read the article.

> 1) The journal always asked for […] fictionalized case studies.

This is false, as evidenced by the article.