alt.hn

3/4/2026 at 3:02:36 PM

Medical journal says the case reports it has published for 25 years are fiction

https://retractionwatch.com/2026/03/03/canadian-pediatric-society-journal-correction-case-reports-fictional-paediatrics-child-health/

by Tomte

3/4/2026 at 4:42:44 PM

What a mess.

> One author of a case report was surprised to learn of the correction — because the case described in her article is true.

So they managed to mess up even the correction of their giant mess.

> correcting the correction "would be difficult."

I bet. That's why they should have got it right in the first place. I would go absolutely ballistic if they libelled my work like that.

by krisoft

3/4/2026 at 4:57:23 PM

Yeah, they seem to have been quite sloppy with these vignettes.

Though note that in the situation of the mislabeled real case, the formal solution could be a retraction of the entire highlight article, since having a real case study is against the (poorly implemented) policy.

I don't know how patient consent for being used in a case study works. Did this author get a perpetual license? Did they just copy something from another article they wrote, or from an article someone else wrote?

by SiempreViernes

3/4/2026 at 6:24:59 PM

You can see the full article here: https://www.cpsp.cps.ca/uploads/publications/pxy155-Teething...

It looks like it has a short intro paragraph that talks about a specific case with no identifying details (beyond "a previously healthy 4-month-old boy"), citing this report by other doctors: https://pubmed.ncbi.nlm.nih.gov/27503268/ followed by further discussions of physician reports and survey data.

The correction is explicitly listed as applying to that article (https://academic.oup.com/pch/article-abstract/24/2/132/51642...), which itself seems wrong, since that article doesn't appear to include a fictional vignette.

by smelendez

3/4/2026 at 6:01:32 PM

It looks like they labelled all of them fiction based on a single instance of one of the authors fabricating their case, a gross overcorrection. I wonder if they flinched at the prospect of actually assessing the validity of all of them and decided it was safer to just disclaim them.

by andrewflnr

3/4/2026 at 6:24:18 PM

> It looks like they labelled all of them fiction based on a single instance of one of the authors fabricating their case

Does it? That's directly at odds with what the article and editor say.

by petesergeant

3/4/2026 at 8:40:04 PM

> The corrections come following a January article in New Yorker magazine that mentioned one of the reports — “Baby boy blue,” ... was made up.

> “Based on the New Yorker article, we made the decision to add a correction notice to all 138 publications..."

Emphasis mine.

by andrewflnr

3/5/2026 at 2:05:30 AM

Sure, if you emphasize selectively you can make it sound like it says that. Here are some other quotes from the article that clearly refute your interpretation:

> The journal decided when it first started publishing the article type “that the cases should be fictional to protect patient confidentiality,”

> While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.

> “The editor acknowledged that the editorial team is at fault for overlooking the fact that our case was real during the review process,”

It's pretty clear that the journal always thought of these as fictional vignettes, and either didn't realize or didn't care that they had not made that sufficiently clear to the readers. The New Yorker article clued them into the fact that it was a problem, so they added the correction to all of their case studies to clarify that they were intended to be fictional. In (at least) one case, the author also didn't realize they should be fictional, and submitted a real case study which has now been incorrectly corrected.

by sparky_z

3/4/2026 at 9:42:31 PM

> While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.

Sounds like they were asking authors for fiction, so probably plenty of them are.

by crummy

3/5/2026 at 7:28:50 PM

They asked the authors for fiction “at times”. Meaning that some are fiction, and some very well might not be. The best they can do is try to contact the authors and see if the case report they wrote is fictional or not. The second best is to admit that they made a mess and say “the case reports might or might not be fictional, we have no way of knowing”.

by krisoft

3/5/2026 at 8:45:39 PM

I suspect you're reading too much into that phrase. It seems more likely to me that the reporter here contacted one or more of the case report authors directly to ask for a copy of what instructions they received from the journal at the time. (This would be good journalistic practice, rather than just take the journal's word for it, when they might have an incentive to lie.) But they obviously couldn't explicitly confirm that every single author received similar instructions, so they used the “at times” phrase to cover their ass.

If they had direct evidence that some author's instructions failed to ask for the case study to be fictionalized, I think they would have specifically said that. It's more definitive, and catches the journal in a lie.

I'm pretty sure what happened here is that:

1) The journal always asked for and thought they received fictionalized case studies.

2) It never occurred to them that they were presenting the case studies in a way that could be misinterpreted. (This is indefensible negligence, but I also understand how it could have happened "innocently".)

3) Once the issue came to light, they issued blanket corrections to every case study to describe them as fiction, because they asked for fiction and edited them all as fiction. (I.e., they didn't do any fact checking or independent confirmation, beyond broad medical strokes.)

4) At least one author didn't read the instructions carefully enough and sent in a real case study, which as the article says, wasn't caught by the editors during the review process. (And really, how would they catch it? If they thought they asked for fiction, they wouldn't be fact checking it.)

I actually think the disclaimer may be appropriate, even on the article that was written as a true story, if it wasn't reviewed as one.

by sparky_z

3/4/2026 at 7:08:12 PM

> I would be absolutely ballistic if they would be libelling my work like that.

Genuine question, could they sue for this? It seems like a pretty good case.

by RobotToaster

3/4/2026 at 3:47:34 PM

Speaking this as a spouse of a medical doctor -- case reports are sometimes a good way to increase the bullet point count in your CV if you are a medical resident. A lot of residents do that just for the sake of beefing up their CVs (to apply for fellowship for example).

by programmertote

3/4/2026 at 5:49:13 PM

I don't see anything wrong with that by itself; with the amount of patients doctors see there should be one once in a while that is worth reporting. Or are such cases so rare that the doctor is incentivized to lie?

by copperx

3/4/2026 at 6:42:10 PM

I think you may have missed the original commenter's point. Residents (and medical students) are highly incentivized to publish unrealistic numbers of papers and case reports. One case report doesn't cut it—you need literally dozens of publications to match into some of the most competitive residency and fellowship programs. The NRMP (match organizer) publishes a document every 2 years that summarizes all of these stats. The 2024 version is in the link below, and page 12 supports what I'm saying.

https://www.nrmp.org/wp-content/uploads/2024/08/Charting_Out...

by sxg

3/4/2026 at 8:19:20 PM

This is another example of Goodhart's law in action, right?

Weirdly, Pediatrics (chart 7) skews the other way (applicants with fewer publications tended to get into residency programs)? Are those doctors/administrators/programs somehow seeing through the nonsense?

by digitalPhonix

3/4/2026 at 9:52:28 PM

I wonder if it's because pediatrics is not competitive unless applying to a top program.

by husarcik

3/4/2026 at 8:52:42 PM

27.7 works to match derm. Holy crap that’s a lot. No way. We would be gods of skincare by now.

by peyton

3/4/2026 at 7:07:23 PM

In vet med, case studies are still pretty important, but that's because vet med is in its infancy compared to human medicine. At least one case study, usually two, is required to be eligible to take boards. Future board renewals, I think for most boards, require having published one original piece of research or two case studies, among a slew of other requirements.

by snapetom

3/4/2026 at 4:23:12 PM

> The articles usually start with a case description followed by “learning points” that include statistics, clinical observations and data from CPSP.

I can see why fictional cases could be used here as a teaching aid (based on real cases/illnesses but simplified to make the learning points succinct), but surely if the cases are being cited elsewhere someone should have raised the issue earlier?

by helsinkiandrew

3/4/2026 at 4:47:58 PM

Since it was for teaching I expect the case studies were always showing typical features of real cases, so there's nothing in the case vignette itself to give it away unless the author picks a funny name or something like that.

Rather it would be the entire form of these short highlight articles that would make you keep searching for a proper citation, unless you're lazy or pressed for time.

by SiempreViernes

3/4/2026 at 5:13:09 PM

Wouldn't citing actual cases be a HIPAA violation? I can see why they would invent example cases, based on real ones, especially if they are fairly pedestrian cases.

I mean. Except if your pedestrian example does not reflect reality, then that is bad.

by ultropolis

3/4/2026 at 6:38:25 PM

It's a privacy violation to reveal information that identifies the patient. It is not a violation (and is extremely common) to recount details without noting names, places, or even dates. Unless you already have access to a database of records you won't be able to track it down.

It's even common during talks to display diagnostic images that have had any identifying marks redacted.

by fc417fc802

3/4/2026 at 6:00:35 PM

HIPAA is American, not Canadian.

by reenorap

3/4/2026 at 5:13:50 PM

I think this is mainly a case of the common "didn't notice when crucial literature for own published content was retracted, get caught with pants down when the replication police come knocking".

Obviously the poor labelling is bad, but 9 bad citations per year isn't the end of science, and better labelling wouldn't discourage all the lazy authors who chose to cite these highlight articles; it'll just shift who is to blame.

The real problem is hosting a review article about research that was retracted, and it sounds like they aren't moving very quickly on taking that piece down.

by SiempreViernes

3/4/2026 at 5:35:56 PM

This is fine, though somewhat belated. But it does nothing to deal with the public's growing distrust of science in general, and medical science in particular.

by TomMasz

3/4/2026 at 6:37:53 PM

The "growing distrust" is due to a concerted disinformation campaign which is independent of the facts.

There was indeed much negative information that the public was not aware of, and they should perhaps have held more skepticism than they did. But the gleeful acceptance of outright anti-science lies implies that they were never really in a position to make a sound judgment one way or the other.

In those circumstances I'll settle for people reaching the correct action: that practically all accepted medicine is correct and they should follow their doctor's advice. If they choose to over-inflate the importance of things that do indeed go wrong, then they are the ones failing to reach valid conclusions.

by jfengel

3/4/2026 at 8:16:39 PM

[flagged]

by tw85

3/4/2026 at 9:20:15 PM

I thought it was "contract and spread" you can't even get your own disinfo straight

by casey2

3/4/2026 at 8:48:57 PM

Like I said: every word out of your mouth there is a lie. Yes, I know the links you're about to hand me, to right-wing disinformation sites and actual news articles that don't say what you're pretending they say.

These are straight out falsehoods, collected for you deliberately, which you are repeating because you didn't even pretend to examine them critically. There is no way to discuss the actual mistakes made during the pandemic when it takes me ten times as long to refute the lies you're spreading.

by jfengel

3/5/2026 at 3:03:08 PM

You're very quick to throw around unsubstantiated accusations of spreading misinformation while providing nothing of substance to back it up. Pounding on the keys forcefully doesn't carry an argument. Come back when your temper tantrum subsides.

by tw85

3/4/2026 at 5:26:24 PM

Original HN discussion about the case:

https://news.ycombinator.com/item?id=46789205

by snapetom

3/4/2026 at 6:50:42 PM

Thank you, this really adds the missing context to this update about fictional case studies. The original read was compelling and also alarming.

by skyberrys

3/5/2026 at 1:34:01 AM

Serious question: Why do doctors change their practice so much based on one case study? Surely, even if there isn't any malice, a doctor can make a mistake?

by Bratmon

3/5/2026 at 2:36:55 AM

I wondered this too after reading the original New Yorker article a few weeks back and was quite surprised.

However, the article also made me think that once a practice is adopted it's hard for it to change, even if the evidence supports changing. (Which is how I expected it to be from the outside.)

I figured there was some context that I was missing as to why some things are quicker to adopt and others less so. Maybe because adopting this change was seen to be "saving" lives by being more cautious about how medicines and feeding interact, and reverting the change is "risky" in case there is truth to it.

by myiosaccount765

3/5/2026 at 7:13:05 AM

Case studies are used in medical decision making only when there is no better form of evidence available, or there is a gap in current evidence. They are not the first place to look.

by cfu28

3/5/2026 at 2:42:43 AM

[dead]

by aaron695

3/4/2026 at 4:40:49 PM

“Pics or it didn’t happen,” goes a long way in my book.

by learingsci

3/4/2026 at 9:46:54 PM

You may want to update that, given recent advances in generative AI.

No idea what you should update to, mind you, but the old era of photographic evidence is on its last jpgs.

by MarkusQ

3/4/2026 at 5:44:00 PM

They had access to ChatGPT for the last 25 years!

by sekuraai

3/4/2026 at 5:36:24 PM

Too late; it's already in the bloodstream. LLMs will be recommending things to pediatric doctors and families from fabricated archives for years, probably.

by damnesian

3/4/2026 at 5:41:12 PM

It’s all an hallucination.

by Towaway69

3/4/2026 at 7:20:05 PM

That's a serious issue: How could retractions work with LLMs? How could they be made to work?

Accuracy rots over time, and at varying rates. It's not just scientific research.

by mmooss

3/4/2026 at 4:10:29 PM

In the era of GitHub etc, if you're not giving out every single data point of your research, it should be assumed it's fake.

by sourcegrift

3/4/2026 at 4:30:26 PM

The article is about case reports, not about empirical studies. Putting a fake case report on GitHub wouldn't make it any less fake.

by fsh

3/4/2026 at 4:50:55 PM

> Putting a fake case report on GitHub wouldn't make it any less fake.

Much easier to review for whomever wants to review it.

by qwertox

3/4/2026 at 5:14:36 PM

Do you know what a case report is?

by NewsaHackO

3/4/2026 at 5:01:53 PM

Obviously just sending it via email to the reviewers works just fine in practice anyway, the problem is really that they published a summary piece about research that was later retracted, but didn't take down their own article.

by SiempreViernes

3/4/2026 at 5:16:40 PM

Would it be easier, though? Medical records (in the US) are covered by HIPAA and, to my knowledge, there is no anonymized canonical record similar to what we have for legal decisions. Without that, how difficult would it be to just "make shit up"?

by drivingmenuts

3/4/2026 at 5:36:28 PM

And then there would be large amounts of fake data for the next generation of AIs to learn from.

What is stopping anyone from faking the data they use in their research papers?

Sure, it might be verifiable, but what if the data was made to give the desired results, i.e. faked to be exactly what is required for the paper?

by Towaway69

3/4/2026 at 4:28:35 PM

Out of context that makes sense... but in the context of a case report, how do you implement that? The patients have privacy rights and the authors/doctors have a responsibility to protect them. That doesn't justify this, but it does force a conversation about what "every single data point" means. Does it mean the patient's real name and social security number? Their complete medical chart?

Case reports are descriptive not determinative and should be treated as such by other scholars. They are 'I saw this' not 'this is generalizably true'. They can (and often are) replicated or countered but they are not per se research as you are thinking about it. Whether it is fictitious or not, other scholars should be cautious in citing them as proof/evidence in papers that fit into the 'research' mold.

by avs733

3/4/2026 at 5:01:01 PM

From a legal perspective, journal article authors can implement this by following the official HHS guidance for de-identification. This applies to any use of protected health information (PHI), not just case reports.

https://www.hhs.gov/hipaa/for-professionals/special-topics/d...

The IRB for a particular organization can impose additional restrictions.

by nradov

3/4/2026 at 5:25:07 PM

I don't mind the fact that the case reports were fictional -- actual cases can be problematic in terms of privacy as it may be easy to ascertain the patient's identity from the details -- but not putting a notice that it was fictional (or altered from a real case for privacy), for teaching purposes, is pretty bad.

by insane_dreamer

3/4/2026 at 3:44:28 PM

The detail that makes this more than a labeling error: the fictional nature appeared in the journal's author guidelines, not in the published articles. Researchers who cited these 61 papers had no way to distinguish them from genuine case reports. 218 citations later, the fiction is embedded in secondary analyses and literature reviews written by people who had no idea.

The "Baby Boy Blue" (2010) case is the clearest example of the harm: an infant allegedly exposed to opioids through breast milk. That case influenced clinical guidance on codeine safety in nursing mothers for years. The CARE guidelines (Consensus-based Clinical Case Reporting Guidelines) exist specifically to create transparency in case reporting. They're voluntary, which is how a journal can run a 25-year undisclosed fiction program and technically say the authors knew.

by newzino

3/4/2026 at 4:32:23 PM

Doesn't sound like these works were "full" articles, but rather something more like short review articles.

by SiempreViernes

3/4/2026 at 3:41:41 PM

I think research should be assumed fiction until it’s peer reviewed.

by october8140

3/4/2026 at 3:44:52 PM

There is not good evidence that peer review improves quality, and there is perhaps some to the contrary (many predatory journals are peer reviewed). The arXiv (unreviewed) is among the most reliable sources available.

by contubernio

3/4/2026 at 4:34:41 PM

Yeah, it's almost like science is better when the scientific method is applied to everything, instead of delegating validation to some third party based on credentials or authority or social status.

by observationist

3/4/2026 at 3:52:13 PM

What do you suggest instead? Certainly not giving up I hope.

by ranger_danger

3/4/2026 at 3:53:50 PM

I think it's a bit different considering the goal was a teaching tool for well recognised conditions.

>all or almost all were cases of very well recognized conditions [...] where a single case report would not generate any interest or ever be cited.

by Rallen89

3/4/2026 at 4:05:17 PM

That is an ironic proviso given that the article clearly states

"The peer-reviewed articles don’t state anywhere the cases described are fictional."

Peer review by peers who are trained by non-replicable science is not helpful...

by readthenotes1

3/4/2026 at 3:46:57 PM

Independently replicated. "Reviewed" says pretty much nothing.

by moi2388

3/4/2026 at 4:27:15 PM

Peer review is a sniff test. It cannot guarantee that the results are correct and the conclusions are right. It is just designed to limit some kinds of errors. Replication is important.

by kergonath

3/4/2026 at 4:47:19 PM

Case studies can't be replicated. They aren't experiments.

by roywiggins

3/4/2026 at 9:19:49 PM

You can find multiple cases that are comparable. One case study is an anecdote. Multiple studies for the same kind of case...

by em-bee

3/4/2026 at 4:59:29 PM

Tough to replicate an isolated case study?

by ambicapter

3/5/2026 at 5:09:20 AM

Especially if they are made up.

by moi2388