- cross-posted to:
- news@lemmy.world
As a big fan of IF, I find this really depressing.
This shouldn’t be taken seriously. It’s a very quick and dirty analysis presented at a conference without peer review. Start worrying when/if the scientific paper comes out, which might take years or might never happen.
Doesn’t matter if it’s peer reviewed. There isn’t enough data to establish causality.
Chances are people who do those sorts of diets are already at risk. Those are the super important data points we don’t have.
So even if their peers confirm that the data and the analysis are accurate, it doesn’t mean anything without further study.
Yep, I don’t see any way they could demonstrate statistical significance here, since they couldn’t reject the null hypothesis.
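Toy illustration of that confounding problem (completely made-up numbers, nothing to do with the actual dataset): if the people who pick the diet already carry higher baseline risk, a naive comparison comes out “statistically significant” even when the diet has zero true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical baseline risk (e.g. pre-existing condition) -- invented, not from the study.
baseline_risk = rng.uniform(0.0, 1.0, n)

# Higher-risk people are more likely to try the restrictive diet.
on_diet = rng.random(n) < 0.2 + 0.4 * baseline_risk

# The outcome depends ONLY on baseline risk; the diet has no effect at all.
event = rng.random(n) < 0.02 + 0.10 * baseline_risk

table = np.array([
    [np.sum(event & on_diet),  np.sum(~event & on_diet)],
    [np.sum(event & ~on_diet), np.sum(~event & ~on_diet)],
])
chi2, p, dof, expected = stats.chi2_contingency(table)

print(f"event rate on diet:  {event[on_diet].mean():.3f}")
print(f"event rate off diet: {event[~on_diet].mean():.3f}")
print(f"chi-squared p-value: {p:.2e}")  # 'significant', yet the diet does nothing
```

The p-value only tells you the two groups differ, not why they differ.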
Peer review is unfortunately not a magic bullet. And conference abstracts do get a form of peer review because that’s how they get accepted for the conference.
The actual problem is that academics can pad their CVs doing terrible research and publishing it with alarmist headlines.
When they’ve written it up, it will get through peer review, somewhere, somehow, because peer review does not work. The fight will happen in the letters pages (if anyone has the energy) and won’t change a damn thing anyway.
Presentations don’t get peer review, at least not in biology (my field). I agree that peer review is totally broken though.
I didn’t say they did. But authors don’t just get to submit an abstract and have it accepted; it has to be selected by whatever committee process was set up to sift the submissions. Many conferences will do a better job than the journals, but mileage varies all over the fucking shop.
But my main bugbear here is the idea that peer review means anything. The dross that gets published is beyond depressing. But it’s probably worth noting that dross is much less likely to get submitted to a conference because a) fuck all CV points for an abstract and b) getting accepted means registering for the conference and turning up to get your peer review in person. Scammers don’t do that. Although there have been entire scam conferences so … heuristics don’t work any which way, really.
Sounds like this “study” (aka a self-reported, retrospective, epidemiological survey - the kind of analysis that I think just confuses the public when it gets called a study, but whatever) needs a lot more work before it can say anything with certainty. The kicker in the article, I think, is this:
“…the different windows of time-restricted eating was determined on the basis of just two days of dietary intake.” Yikes. That, and it sounds like they didn’t control for any of the possible confounding variables such as nutrient intake, demographics, weight, stress, or basically any other risk factors or possible explanations. It’s entirely possible that once they actually control for this stuff, the correlation could shrink to almost nothing or even reverse, once we see that the people who tried this diet were just baseline higher risk than those who didn’t.
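To make that last point concrete (again with invented numbers, not the survey’s data), here’s a minimal sketch of what adjusting for a baseline-risk confounder does to the estimate, assuming statsmodels is available:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical confounder (say, pre-existing cardiovascular risk) -- not taken from the article.
risk = rng.uniform(0.0, 1.0, n)
on_diet = (rng.random(n) < 0.2 + 0.4 * risk).astype(float)
event = (rng.random(n) < 0.02 + 0.10 * risk).astype(float)  # diet has no true effect

# Crude model: outcome ~ diet only.
crude = sm.Logit(event, sm.add_constant(on_diet)).fit(disp=0)

# Adjusted model: outcome ~ diet + baseline risk.
X = sm.add_constant(np.column_stack([on_diet, risk]))
adjusted = sm.Logit(event, X).fit(disp=0)

print(f"crude diet odds ratio:    {np.exp(crude.params[1]):.2f}")
print(f"adjusted diet odds ratio: {np.exp(adjusted.params[1]):.2f}")
```

The crude odds ratio looks alarming, but the adjusted one collapses toward 1 — which is exactly the “shrink to almost nothing” scenario, and with other data-generating assumptions it can flip sign entirely.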