Wikipedia talk:Identifying reliable sources (medicine)/Archive 22

From WikiProjectMed

The second sentence of this guideline

Reads: ... it is vital that any biomedical information is based on reliable, third-party, published secondary sources and that it accurately reflects current knowledge.

There are at least three problematical terms in that: reliable, secondary and knowledge.

"Reliable" is kind of categorical. Something either is or isn't reliable - at least in the way we're using it there. We could, instead, use "most reliable". Actually, I'd prefer "most trustworthy" because that says it more straightforwardly in my opinion. Whatever. Using "reliable" categorically is a mistake.

"Secondary" is wrong. Some of our best medical content is based on tertiary sources such as textbooks, and there is, rarely, a place for primary sources.

"Knowledge" is a very imprecise term. What most of us are trying to convey is the current expert consensus (or lack thereof). --Anthonyhcole (talk · contribs · email) 07:31, 5 November 2015 (UTC)

Just addressing "Reliable": I believe this word is in there because WP:MEDRS is a refinement of WP:RS. It directs readers that sources should be reliable, which is a good thing. One point of the proposal to add "health" is that it's needed to address snake-oil advocates. One of the main tactics I have seen is using unreliable sources to insert claims. "Reliable" is needed to stop unreliable sources; I don't think any of the responders want unreliable sources used in WP. Adding "most reliable" or "trustworthy" would add complexity, and end up in countless arguments over what is the most reliable or trustworthy source. AlbinoFerret 16:52, 5 November 2015 (UTC)
Agree with AlbinoFerret that adding "health" would certainly help to have clearly applicable scope of MEDRS when working on homeopathy, AYUSH, acupuncture, and all the other forms of faith healing. I doubt, however, that the collective will of the community is up to doing that. LeadSongDog come howl! 17:47, 5 November 2015 (UTC)
Well, I am against adding the word health, but I can see why some would want it. AlbinoFerret 18:50, 5 November 2015 (UTC)
  • "Most reliable" is not acceptable. The goal is good content, not an impressive list of sources. If a source is reliable for a statement, then that's good enough, even if it would be possible to replace it with a gold-plated source that supports exactly the same good content. The most basic requirement of a guideline on sourcing is to identify the line between "barely good enough" and "not quite good enough". The purpose is not to encourage editors to make the perfect be the enemy of the good and then blank anything that's "reliable" but not "the most reliable" source possible.
  • The type of textbooks that MEDRS prefers are secondary sources. I agree with you that reputable tertiary sources and important primary sources should be used (appropriately), and even preferred in some cases. A tertiary source such as a lower-level textbook or lay-accessible reference work is IMO ideal for gross anatomy and for basic descriptions of common diseases.
  • That many editors are trying to convey only current expert consensus, rather than a complete encyclopedic description of a subject, sounds like a problem statement to me, rather than a point to brag about. WhatamIdoing (talk) 04:20, 6 November 2015 (UTC)
Thanks for all the feedback. I've got a busy weekend, but will give the above serious thought. --Anthonyhcole (talk · contribs · email) 13:27, 6 November 2015 (UTC)

Popular press

In this interview, Atul Gawande says that his articles in The New Yorker are reviewed more rigorously than his articles in the scientific journals. I think Gawande is just as reliable a source as Susan Dentzer. Should his assessment go in the article along with hers?

http://www.statnews.com/2015/11/11/atul-gawande-health-care-journalism/
Health Atul Gawande, surgeon and storyteller, on health care’s ‘dramatic transformation’
By Rick Berke
November 11, 2015

Q: You write both magazine articles and academic papers. How does the editing process compare?

A: The editing process in journalism, I think, sometimes offers better protection for the quality of the ideas and writing than our peer review process. At The New Yorker, they will not only look to see if I have references and sources for everything I say, they will look up the references and call the sources. And they will also search themselves to make sure I haven’t cherry-picked the information. It’s a much more rigorous process than the one I go through with my scientific work.

--Nbauman (talk) 20:25, 14 November 2015 (UTC)

I detect a smidgen of hyperbole. For some journals, one could argue that the peer review process is as robust as the editorial process Gawande describes for his popular work. The core journals have a reputation to uphold and will have articles reviewed rigorously, as well as having the statistics verified by statistical editors. Furthermore, the professional readership of journals such as NEJM and The Lancet is such that cherry-picking is very difficult without a grumpy "letter to the editor". To paraphrase Torvalds: many eyeballs make every error shallow.
I might wish to cite Gawande or similar authors in the "society and culture" sections of medical articles, but I would not wish to place them on the level of Cochrane or the Annals of Internal Medicine if it came to the discussion of clinical benefit etc. JFW | T@lk 15:15, 15 November 2015 (UTC)
No, you can't assume Gawande was being hyperbolic just because you disagree with his conclusion. He was speaking on the record, for quotation, and his own credibility was at stake. The New Yorker's fact-checking is legendary, and has itself been the subject of many articles in WP:RS.
OTOH I read lots of peer-reviewed articles, and they often make major mistakes, most commonly when they find association and conclude causation. I've also heard editors from the NEJM and Lancet tell me about their own mistakes, and advise me not to take them too seriously. https://www.nasw.org/users/nbauman/polmedj.htm
You are certainly not arguing that the NEJM and Lancet, much less the second-string publications, are perfect. The only question is, how imperfect. I think the New Yorker is usually as accurate, and sometimes more accurate, than the peer-reviewed journals, even the major ones. For example, reviewers assume that the manuscripts accurately represent the data. Journalists don't. Sometimes journalists find out that the investigator fabricated clinical data. Look up the Pfizer Trovan case. Sometimes the news section in Science magazine or the Wall Street Journal will explain the problems with a clinical study that the peer-reviewed journals never get around to explaining. That happened with Bard's devices.
My real problem with that section is that the introductory statement, "The popular press is generally not a reliable source for scientific and medical information in articles," is unsupported and false. You can't demonstrate that the New Yorker is not a reliable source for scientific and medical information, because it's not true.
The word "generally" is a WP:WEASEL word. How do you define "generally," and what's the evidence? The truth, according to peer-reviewed studies (like Schwitzer's), is that there are reliable and unreliable journalistic sources. This entry is mixing them both together under the rubric of "The popular press," as if there were no difference between the New Yorker and the Huffington Post. The accurate statement is, "The accuracy of journalistic sources varies," and I can support that with evidence (like Schwitzer's). This section should follow the published evidence rather than some Wikipedia editor's personal opinions. --Nbauman (talk) 21:45, 15 November 2015 (UTC)

Here's a report in a major journal with evidence that some journalists and news sources are a more reliable source of medical information than the peer reviewed journals they report on -- about 6% of the journalists and news sources in this study.

http://archinte.jamanetwork.com/article.aspx?articleID=2301146
Research Letter: Reporting of Limitations of Observational Research
Michael T. M. Wang, Mark J. Bolland, Andrew Grey.
JAMA Intern Med. 2015;175(9):1571-1572. doi:10.1001/jamainternmed.2015.2147.
September 2015

This study assesses the journal publication and reporting of observational studies, to see whether they properly distinguished between association and causation.

The authors examined 81 studies in core journals, 48 accompanying editorials, 54 journal press releases, and 319 news stories.

Any study limitation was mentioned in:

  • 70 of 81 (86%) source article Discussion sections,
  • 26 of 48 (54%) accompanying editorials,
  • 13 of 54 (24%) journal press releases,
  • 16 of 81 (20%) source article abstracts, and
  • 61 of 319 (19%) associated news stories.

An explicit statement that causality could not be inferred was mentioned in:

  • 8 of 81 (10%) source article Discussion sections,
  • 7 of 48 (15%) editorials,
  • 2 of 54 (4%) press releases,
  • 3 of 81 (4%) source article abstracts, and
  • 31 of 319 (10%) news stories.

For the 18 articles that contained the causality limitations in the Abstract, discussion, editorial or journal press release, only 9% of the news stories reported the causality limitation.

Note that the news stories were more likely (10%) to mention the causality limitations than the abstracts (4%) and press releases (4%). So something like 6% of the journalists actually read and understood the complete article, and added limitations that the press release and abstract didn't include. --Nbauman (talk) 00:32, 17 November 2015 (UTC)

Looks convincing, but it was published in a major peer-reviewed journal. Got anything from The Daily Mail saying the same thing? (Guy runs as everyone starts throwing things at him...) --Guy Macon (talk) 03:20, 17 November 2015 (UTC)
I think that the key word in Gawande's quotation is "sometimes". WhatamIdoing (talk) 06:40, 17 November 2015 (UTC)
Yes, and Gawande says without qualification that the New Yorker is more rigorous than the peer-reviewed journals he publishes in. So on Gawande's authority, we can divide the "Popular press" into publications that are more or less rigorous than the major peer-reviewed journals. --Nbauman (talk) 10:18, 17 November 2015 (UTC)

This discussion bears little relevance to our guideline, and it ignores the difference in content between medical journals and high quality popular press. Different types of texts lend themselves better to certain forms of fact-checking. The New Yorker isn't used to present medical research, and calling up the sources etc. may be more thorough in one sense, but will be far less thorough in other aspects – because the review is not done by scientists, but by professional journalists.

No one here is saying that traditional peer review is perfect, or that other sources cannot accurately point out flaws in research. That does not mean that The New Yorker all of a sudden would become a higher-quality source for medical content than the best medical journals. CFCF 💌 📧 02:15, 18 November 2015 (UTC)

According to the JAMA Intern Med article, some news stories were more accurate than the journal articles themselves, since the news stories identified weaknesses in the study that the journal articles did not identify. Many journal articles find associations and make unjustified claims of causation. As the JAMA Intern Med article found, journalists often identify that flaw in their stories. If you look at the Health News Reviews website, you'll see that medical journalists are trained to review these and other medical claims. It may be surprising to you that journalists can sometimes be more accurate than peer-reviewed journals, but that's what JAMA Intern Med found. I think we have to go with the facts. --Nbauman (talk) 07:30, 18 November 2015 (UTC)
Yes, that's what I said, I'm not the least surprised that journalists can do a good job. However they do not perform peer review as is standard within the scientific community. They are good at certain things, less so on other things. CFCF 💌 📧 17:33, 18 November 2015 (UTC)
Yes, journalists do not (usually) perform peer review as is standard within the scientific community.
However, that's not the same as "The popular press is generally not a reliable source for scientific and medical information in articles," as the project page now says, based on three citations which don't support that broad statement. "Generally" is a weasel word, which is not supported by evidence.
The evidence from Wang in JAMA Intern Med for example is that some popular news sources are more accurate than some peer-reviewed journals, and in fact some news stories point out exaggerations and factual errors in peer-reviewed articles. That's what Health News Reviews http://www.healthnewsreview.org/ teaches journalists to do.
Contrary to the project page, many news stories do "report adequately on the scientific methodology and the experimental error," and "express risk in meaningful terms." That's what Health News Reviews http://www.healthnewsreview.org/toolkit/ teaches journalists to do, and Health News Reviews frequently reviews news stories to see whether or not they do that.
One fundamental problem with that statement is that it ignores the range of quality among "the popular press." There's a big difference between, on the one hand, the New Yorker or the news section of Science magazine, and, on the other, Medical News Today http://www.medicalnewstoday.com/articles/302809.php or the Huffington Post. --Nbauman (talk) 03:09, 19 November 2015 (UTC)

I don't see any reliable sources here for the following claims of fact in the policy page:

  • Articles in newspapers and popular magazines generally lack the context to judge experimental results. They tend to overemphasize the certainty of any result, for instance, presenting a new and experimental treatment as "the cure" for a disease or an every-day substance as "the cause" of a disease.
  • News articles also tend neither to report adequately on the scientific methodology and the experimental error, nor to express risk in meaningful terms.

They seem to be statements of a Wikipedia editor's personal opinion. Please give me any supporting evidence from reliable sources. I've given peer-reviewed citations above with evidence for the opposite. --Nbauman (talk) 14:50, 19 November 2015 (UTC)

No, your evidence strongly supports the assertion. 6% of journalists being acceptable supports that the popular press is generally of poor quality. CFCF 💌 📧 20:08, 19 November 2015 (UTC)
The key number is not what percentage of news articles are acceptable but rather what percentage of news articles that Wikipedia editors try to use as citations are reliable. The editors cull out the obviously bad sources and only use what they believe to be good sources. --Guy Macon (talk) 21:49, 19 November 2015 (UTC)
User:CFCF, did you read Wang, JAMA Intern Med? --Nbauman (talk) 12:44, 20 November 2015 (UTC)
User:CFCF, I am still waiting for you to supply a WP:RS for the following statements:
  • Articles in newspapers and popular magazines generally lack the context to judge experimental results. They tend to overemphasize the certainty of any result, for instance, presenting a new and experimental treatment as "the cure" for a disease or an every-day substance as "the cause" of a disease.
  • News articles also tend neither to report adequately on the scientific methodology and the experimental error, nor to express risk in meaningful terms.
--Nbauman (talk) 15:58, 21 November 2015 (UTC)
There are sources there, but much of it is inference and experience from Wikipedia editors. Policy documents don't need citations for each and every statement, and if you desperately want one for this uncontroversial statement feel free to use the source BullRangifer gave below. CFCF 💌 📧 17:54, 21 November 2015 (UTC)
User:CFCF, There are sources there, but they don't support the claims. The Mother Jones article that BullRangifer gave doesn't support either of those claims. You have given no WP:RS to support your claims.
You are saying that those two statements are not supported by WP:RS, but by the "inference and experience from Wikipedia editors."
That means it's based on personal opinions of Wikipedia editors.
Who are those Wikipedia editors and what inference and experience do they use to support those opinions?
Can their personal opinions override contrary evidence in articles published in the peer-reviewed journals?
As the Guardian says, you're entitled to your own opinions but you're not entitled to your own facts. --Nbauman (talk) 19:37, 21 November 2015 (UTC)

There is neither a need for a WP:RS, nor a lack of one. Guidelines do not need sourcing in that sense, and the sources used are used to strengthen the message, not to be its sole basis. This guideline is formed through long-term consensus and yes it is opinion of the editors who have written it. If you wish to change it in any major way you should start an RfC.
Currently no evidence has been provided that the contrary statement – that the popular press is a good source – is true, and there is no support for such a view among editors on this talk page. CFCF 💌 📧 21:23, 21 November 2015 (UTC)

Also we could use the article you linked http://archinte.jamanetwork.com/article.aspx?articleID=2301146 to support several of the statements. Only a very small minority of sources are what could be deemed acceptable, and that is what our text expressly says. It doesn't say that all sources are bad, but that most are. CFCF 💌 📧 21:26, 21 November 2015 (UTC)
User talk:CFCF, did you read the entire JAMA Internal Medicine study? --Nbauman (talk) 18:50, 22 November 2015 (UTC)
Yes, and obviously more thoroughly than you – as I can tell you that we allow none of the sources analyzed in that paper (accompanying editorials; journal press releases; and news stories). CFCF 💌 📧 18:59, 22 November 2015 (UTC)
We allow none of those sources? I believe MEDRS says that (for example) press releases "should always be used with caution", which is not quite the same as "not allowed".
As for news articles (which might be more reliable than press releases?), I actually cited one yesterday on a "health" issue: https://en.wikipedia.org/w/index.php?title=Yarumal&diff=prev&oldid=692137872 And you know what? I think that's an okay use of a news article to support "health" information, and that the article was improved (albeit not "perfected") by that addition. WhatamIdoing (talk) 23:26, 25 November 2015 (UTC)
Allow might not be the correct term, but strongly discourage is. The edit you linked is poor practice in that it cites what is at best a quaternary source, citing the Washington Post which cites Yahoo Health which cites New Scientist which cites the original article. I have since substituted with the primary source: [1] CFCF 💌 📧 12:02, 26 November 2015 (UTC)
There's no such thing as a quaternary source. Either it mindlessly repeats the primary, in which case it's still primary, or it provides evaluation and analysis of the original, in which case it's secondary. See WP:LINKSINACHAIN.
Also, why did you tag the source you preferred with {{primary source inline}}? WhatamIdoing (talk) 21:19, 26 November 2015 (UTC)
No, but rather it is a quaternary citation – which is not exactly an indicator of quality. I tagged it as primary source inline because I was unsure of the quality of the study, not having read all of it. CFCF 💌 📧 21:58, 26 November 2015 (UTC)
The same article (as in: same author, same words except for heads and some [mostly small] copyedits) was published by multiple news sources. Why do you think that multiple publications running the same story is an indicator of low quality? Usually, we assume that being picked up by a wire service and run in multiple publications is a positive indication for quality. WhatamIdoing (talk) 01:44, 27 November 2015 (UTC)
That they are citing each other typically increases the risk that facts are distorted. The same is true when writing review articles in medicine, and it is often advisable to instead cite original articles when writing reviews. The practice of citing citations is sometimes called "citation chaining" and is generally frowned upon - example diagram, Uncovered case in New Scientist Invasive species caused nearly half of extinctions? It’s hearsay, APA style manual CFCF 💌 📧 18:20, 27 November 2015 (UTC)
Also, I have to add that being picked up by a news wire does not indicate quality, it indicates that the results are interesting enough to become news. CFCF 💌 📧 19:52, 27 November 2015 (UTC)
(1) User:CFCF. major news services with daily deadlines can't cite each other when they are reporting from news releases. The news release is typically issued on, for example, Monday morning with a release date of Wednesday evening. The wire services also send the release to their clients on Monday morning, with the same release date. As a condition of getting the story on Monday, the news services have to agree not to release the story before Wednesday evening. That gives them enough time to check facts, check with independent experts, and write the story. So by Wednesday evening, they all publish their stories at the same time. They never have an opportunity to read each other's stories. If you actually read those cannabis stories, you'd see that the ones in the major newspapers were significantly different.
(2) Medical news stories go through extensive fact checking in wire services, and then go through further fact checking in major newspapers. This is what they teach their journalists: http://www.healthnewsreview.org/ http://www.healthnewsreview.org/toolkit/tips-for-understanding-studies/ https://www.elsevier.com/connect/how-do-you-know-if-a-research-study-is-any-good
The major wire services have special departments for medical news, and medical news stories get special treatment. They're assigned by medical editors with years of experience, they're written by reporters with years of experience, and they're reviewed by one or usually two medical editors. At least one MD usually reviews every story. They have more rigorous fact-checking procedures than non-medical news. And medical reporters are more likely to get fired if they make mistakes. These reporters regularly talk to the top academic doctors.
That post below, "This Is Why You Have No Business Challenging Scientific Experts," said, "Read all the online stuff you want, Collins argues—or even read the professional scientific literature from the perspective of an outsider or amateur. You'll absorb a lot of information, but you'll still never have what he terms 'interactional expertise,' which is the sort of expertise developed by getting to know a community of scientists intimately, and getting a feeling for what they think." The editors and reporters at the wire services interact with doctors and academic researchers all the time, hang out with them at conferences, spend time in their labs, and feel free to call or email them any time they have a doubt about a story.
This is also true of the major newspapers. I don't know how many MDs the New York Times has working as reporters, but I can think of about half a dozen off the top of my head. And then there's Atul Gawande.
The social science literature about journalism has long reported that journalists take in an enormous amount of information, and go through a process of tremendous selection and condensation. Somewhere between 1% and 10% of press releases finally get into the paper. The wire services get paid because they help perform this process of selecting important news and discarding unimportant news. So they have a strong financial incentive to select only important stories. So if a medical story gets reported by a wire service, it's because that story has been reviewed by editors and journalists who know the medical specialty, have checked with academic doctors, and have selected that story as particularly important.
The reason why they go through this process is that one of their most important audiences is doctors themselves. Wire services provide medical news feeds for doctors, and those readers have zero tolerance for mistakes. So the wire services have a strong financial incentive to get their facts right, as judged by their doctor-readers.
And then in a magazine like New Scientist, they go through the same process again.
I've been reading JAMA, NEJM, BMJ and Lancet for over 20 years, and I talk to their editors regularly. They have a lot of strengths and a few weaknesses, which they readily admit. They also have a lot more respect for the medical press than you do. When a medical journal article is reported by Reuters and the NYT, it goes through an additional layer of fact-checking, and so it can be more accurate than the journal article. The most obvious example is journal articles that claim causation instead of association. --Nbauman (talk) 00:44, 28 November 2015 (UTC)
Citing each other? Are we even reading the same article? First of all, I don't see any "citation" to Yahoo! News on either page, unless you mean linking to this news story, which is written by someone else, in a different publication, in a different year – hardly a sign of a problem. Secondly, this is the original, written by Rowan Hooper. There's no mention of the Washington Post anywhere in it – or at least none that my web browser is capable of finding, even by looking at the HTML source. This is the slightly edited copy of the same article in the Washington Post, also with Rowan Hooper's name in the byline, and published four days later. There's no "citation" to the original, although there are a few links to other New Scientist stories.
Are you actually claiming that a news story is less reliable if it cites sources (and/or builds the web to related stories that might interest readers)? WhatamIdoing (talk) 06:42, 28 November 2015 (UTC)

Wikipedia:Identifying reliable sources

This matter has now extended to the Wikipedia:Identifying reliable sources guideline; see this, this and this edit. Flyer22 Reborn (talk) 01:58, 19 November 2015 (UTC)

This edit too. Flyer22 Reborn (talk) 02:00, 19 November 2015 (UTC)

Wikipedia:Administrators' noticeboard/Incidents#User:Sławomir Biały keeps changing Wikipedia:Identifying reliable sources without consensus --Guy Macon (talk) 02:30, 19 November 2015 (UTC)
The popular press is typically not a good source for medical claims. People are often misquoted. Doc James (talk · contribs · email) 19:01, 22 November 2015 (UTC)
Do you have a WP:RS to support that? --Nbauman (talk) 23:01, 22 November 2015 (UTC)
Well, there are something like 15 down below that support that statement. CFCF 💌 📧 23:07, 22 November 2015 (UTC)
Would you be willing to rewrite the Popular press section basing it on the evidence and conclusions of those 15 studies? You already told me that you would not. --Nbauman (talk) 02:17, 23 November 2015 (UTC)
You are correct, I don't have the time, and even if I did, I have better things to do in my spare time. There just isn't anything there that needs to be rewritten; the guideline is perfectly satisfactory in this regard already. CFCF 💌 📧 13:15, 23 November 2015 (UTC)

Example: Skunk cannabis

Here's an example of what you see in reviews by medical experts of news articles for their accuracy.

http://www.nhs.uk/news/2015/11November/Pages/High-strength-skunk-cannabis-linked-to-brain-changes.aspx
Generally, the UK media covered the story accurately, but some of the headline writers overstepped the mark....
This type of study cannot prove cause and effect, only suggest a possible link, so "proof" is too strong a term. Also, the study didn't look at how the small changes in the brain associated with skunk affected thoughts or other brain functioning, so it was not fair to say skunk "wrecks" the brain.
This study wasn't designed to look at the effect of skunk on mental health illnesses, only small changes in brain structure, so it tells us little about the link between cannabis use and the development of a mental health illness.

However, the news stories got that inaccurate interpretation from the principal investigator of the study himself:

http://www.independent.co.uk/life-style/health-and-families/health-news/skunk-cannabis-can-cause-significant-brain-damage-a6751121.html
"We found that frequent use of high potency cannabis significantly affects the structure of white matter fibres in the brain, whether you have psychosis or not."

So relevant to this discussion:

  • The NHS expert review concluded that "Generally, the UK media covered the story accurately."
  • The original peer-reviewed journal article was unreliable because it failed to distinguish between association and causation, and its principal investigator claimed causation, which went beyond his evidence.

This clearly contradicts the current wording, "The popular press is generally not a reliable source for scientific and medical information in articles." So who do we follow, an anonymous Wikipedia editor with no medical credentials or the expert review of the NHS? --Nbauman (talk) 19:20, 27 November 2015 (UTC)


The wording is not that of "an anonymous Wikipedia editor", but reflects the consensus of the editors of this page. Also that review is not an expert review by the NHS; it was produced by a company called Bazian, at least get your facts straight. Neither is anyone here arguing we should allow for more primary medical sources, which is basically the only thing that link argues against. CFCF 💌 📧 19:42, 27 November 2015 (UTC)
That wording is no longer the "consensus of the editors on this page." Some of us disagree, so there is no longer a consensus.--Nbauman (talk) 23:06, 27 November 2015 (UTC)
Consensus ≠ unanimity. And the consensus seems solid. Alexbrn (talk) 17:40, 29 November 2015 (UTC)
The disagreement is about whether we should include factual claims in the guidelines that are false according to the peer-reviewed literature. Some editors argue that we should follow the peer-reviewed literature, or at least base it on factual evidence. Others argue that we should follow "common sense" (as they interpret it), or their own personal opinions, and that we can ignore the peer-reviewed literature, and factual evidence, in factual claims. We have not resolved that disagreement. The editors who believe that common sense and their own personal opinions are enough have not explained why they believe that, or why the rest of us, and Wikipedia, should follow their personal interpretations of "common sense" and their personal opinions. We also have disagreements about what the peer-reviewed literature actually says. I don't think that's a consensus. --Nbauman (talk) 21:43, 30 November 2015 (UTC)

Popular press cited in peer-reviewed journals

http://archinte.jamanetwork.com/article.aspx?articleID=2471609
Viewpoint
Corporate Funding of Food and Nutrition Research: Science or Marketing?
Marion Nestle, PhD, MPH
JAMA Intern Med. Published online November 23, 2015.

doi: https://www.doi.org/10.1001/jamainternmed.2015.6667

Two recent investigative articles in the New York Times illustrate the concerns about biases introduced by industry funding. The first[3] described the support by Coca-Cola of academic researchers who founded a new organization, the Global Energy Balance Network, to promote physical activity as a more effective method than calorie control (eg, from avoiding sugary sodas) for preventing obesity. The second[4] analyzed emails obtained through open-records requests to document how Monsanto, the multinational agricultural biotechnology corporation, on the one hand, and the organic food industry, on the other, recruited professors to lobby, write, and testify to Congress on their behalf.
Both articles[3,4] quoted the researchers named in these reports as denying an influence of industry funding and lamenting the paucity of university research funds and the competitiveness of federal grants.

3. O’Connor A. Coca-Cola funds scientists who shift blame for obesity away from bad diets. New York Times. August 9, 2015. http://well.blogs.nytimes.com/2015/08/09/coca-cola-funds-scientists-who-shift-blame-for-obesity-away-from-bad-diets/?_r=0. Accessed October 22, 2015.
4. Lipton E. Food industry enlisted academics in G.M.O. lobbying war, emails show. New York Times. September 5, 2015. http://www.nytimes.com/2015/09/06/us/food-industry-enlisted-academics-in-gmo-lobbying-war-emails-show.html. Accessed October 22, 2015.
--Nbauman (talk) 17:33, 29 November 2015 (UTC)

http://www.nejm.org/doi/full/10.1056/NEJMp1508818
Perspective
History of Medicine
Preventing and Treating Narcotic Addiction — A Century of Federal Drug Control
David T. Courtwright, Ph.D.
N Engl J Med 2015; 373:2095-2097
November 26, 2015
DOI: https://www.doi.org/10.1056/NEJMp1508818
The key objectives — reducing fatal overdoses, medical and social complications, and injection-drug use and related infections — are difficult to achieve if abstinence-oriented treatment is the only option available. Yet that remains the situation in many places, particularly in rural locales, where officials dismiss methadone and buprenorphine as unacceptable substitute addictions. “IF YOU WANT PROBATION OR DIVERSION AND YOUR ON SUBOXIN,” declared an erratically spelled sign outside a Kentucky courtroom, “YOU MUST BE WEENED OFF BY THE TIME OF YOUR SENTENCING DATE.”5
5. Cherkis J. Dying to be free. Huffington Post, January 28, 2015 (http://projects.huffingtonpost.com/dying-to-be-free-heroin-treatment).
--Nbauman (talk) 17:40, 29 November 2015 (UTC)

Unsourced factual claims OK on policy pages?

  • I categorically reject CFCF's assertion that factual claims made on policy and guideline pages do not need sources. We can base policy on the consensus of the community, but when we make a factual claim like "most medical news articles fail to discuss important issues such as evidence quality" we need a source that actually says that most medical news articles fail to discuss important issues such as evidence quality. Citations to pages that are clearly labeled "Science Opinion" don't cut it. Sampling the top 10 bestselling UK newspapers for one week (which would include The Daily Mail, a paper that is not only unreliable but makes many, many health claims, thus skewing the sample) doesn't cut it. One ex-journalist publishing his analysis of 500 health articles that he cherry-picked "from mainstream media in the US" doesn't cut it either. The wording that CFCF keeps removing ("In two studies, most medical news articles failed to discuss important issues such as evidence quality") is supported by the sources. CFCF's preferred version is not. This is not the first time CFCF decided to repeatedly force his version into the guideline through reverts against a clear consensus, and if we don't put a stop to this disruptive behavior now, it won't be the last. --Guy Macon (talk) 23:18, 21 November 2015 (UTC)
On the contrary, this is the second time that long-standing consensus content has been changed by the above user despite lack of community or consensus support. I have several times suggested starting an RfC upon wishes to make major controversial changes that do not reflect prior consensus. These statements are supported by all of the sources that have so far been linked. What seems to be lacking is the understanding that the sources only support in part the text in the guideline, the rest being value judgments made by the writers and community.
One essay that comes to mind is Wikipedia:You don't need to cite that the sky is blue, because the statements are so general and so uncontroversial that this discussion quickly turns inane. Even the dissenting voices support our generalization that "most" (not all) popular science sources are of too low quality, and the reason there aren't sources so far for each and every statement is that they haven't been considered necessary. There are a multitude of sources out there, but finding and adding a proper one is frankly a waste of time, because it isn't needed for this type of statement. CFCF 💌 📧 00:13, 22 November 2015 (UTC)
  • PMID 26257667 – also points to a multitude of other cases in the brief, we can cite those as well
  • PMID 26208573 – Neuroscience "Neurohype can be sustained by metaphors that favour oversimplifications"
  • PMID 25129138 – "In 2012, the American Civil Liberties Union reported that of the single-sex education programs they investigated, nearly all cited pseudoscientific material from the popular press, not peer-reviewed literature"
etc., etc. Can we drop this absurdity? CFCF 💌 📧 00:25, 22 November 2015 (UTC)
I will simply restate what Nbauman wrote above:
You have given no WP:RS that actually supports your claims.
The sources you do cite do not contain what you say they contain.
You have said on multiple occasions that your claims do not need to be supported by WP:RS, but by the "inference and experience from Wikipedia editors."
That means that they are based on the personal opinions of Wikipedia editors.
Who are those Wikipedia editors and what inference and experience do they use to support those opinions?
Can their personal opinions override contrary evidence in articles published in the peer-reviewed journals?
You are entitled to your own opinions but you are not entitled to your own facts. --Guy Macon (talk) 19:11, 22 November 2015 (UTC)
User:CFCF, many people like to believe that their own opinion is as obvious as "the sky is blue," but other people disagree that those opinions are so obvious.
There is no consensus on your claims about newspapers such as the ones I cited above. In fact your claims violate WP:WEASEL, they are untrue, they overgeneralize, they misstate the factual evidence, and there is peer-reviewed evidence to demonstrate that they are false. You are conducting WP:OR by cherry-picking peer-reviewed articles that you agree with, and you are misinterpreting the articles which often disagree with you, in part because you are apparently reading the abstract and not the full article.
Significantly, several of the articles were comparing flaws in the news reporting with flaws in the original journal articles and their abstracts, and the flaw in the news articles was that they didn't always correct the flaws in the journal articles. So your conclusion could just as easily be that peer-reviewed journal articles are at least as unreliable because they also exaggerated the facts and incorrectly inferred causation from association.
The irony is that you are making claims that only the peer-reviewed journals, and not the news media, are reliable. These claims are based on your own personal opinions, and your selective WP:OR based on articles that you haven't even read. I want to base WP's guidelines on provable facts as demonstrated by the peer-reviewed scientific literature, that I've read (and discussed with the authors). You refuse to accept the very peer-reviewed scientific literature that you claim is more accurate than news stories.
The scientist is rejecting peer-reviewed science, and the journalist is defending peer-reviewed science.--Nbauman (talk) 19:23, 22 November 2015 (UTC)
Start an RfC to change what is long-standing consensus or stop wasting everyone's time with inanity. CFCF 💌 📧 19:28, 22 November 2015 (UTC) 
You are required to follow our WP:CIVIL and WP:NPA policies. Stop posting comments such as "stop wasting everyone's time with inanity". Attacking other editors does not strengthen your arguments. Quite the opposite, actually. As for posting an RfC, there is no need. One editor agreed with part of what you are trying to do and everyone else has told you that you are wrong. WP:STICK and WP:IDHT apply here. --Guy Macon (talk) 23:09, 22 November 2015 (UTC)
You're not making any sense. CFCF 💌 📧 23:16, 22 November 2015 (UTC)
Keep insulting other editors and we will see if being blocked helps you to comprehend Wikipedia's civility policy. --Guy Macon (talk) 02:38, 23 November 2015 (UTC)
Some more sources pointing to the low quality of the popular press: PMID 25761887 , PMID 24675630 , PMID 23219265 , PMID 15726859 CFCF 💌 📧 19:41, 22 November 2015 (UTC)
A really good one PMID 17379907 : "How doctors can get behind the headlines" Ben Goldacre. CFCF 💌 📧 19:56, 22 November 2015 (UTC)

Few things can make a doctor's heart sink more in clinic than a patient brandishing a newspaper clipping. Alongside the best efforts to empower patients, misleading information conveyed with hyperbole is paradoxically disempowering; and it's fair to say that the media don't have an absolutely brilliant track record in faithfully reporting medical news.

Comment: It is not standard practice to add Template:Citation needed and the like to our policy and guideline pages (or our essay pages, for that matter); whenever I see such tags on such pages, I think it's silly. That stated, if we are going to add citations to our policies and guidelines (or essays), which we obviously sometimes do, it makes sense that someone would want citations for certain other parts of them. But our policies and guidelines (and essays) are not Wikipedia articles, and should generally not be treated as such. Flyer22 Reborn (talk) 19:48, 22 November 2015 (UTC)

Totally agree. Only the encyclopedia (ergo articles) require citations, and many policies and guidelines apply only to them. The rest is still governed by (unsourced) consensus. -- BullRangifer (talk) 23:56, 22 November 2015 (UTC)
User:CFCF, does this mean that you mean that you would accept adding to the Project Page the conclusion of this article, which you cited? This is my main argument.
https://www.ncbi.nlm.nih.gov/pubmed/23219265?dopt=Abstract
Analysis of health stories in daily newspapers in the UK.
CONCLUSIONS: There are significant differences in the quality of reporting within and between major daily UK newspapers, with anonymous articles being the poorest quality, and widespread reliance on press releases from the major UK scientific journals.
--Nbauman (talk) 21:13, 22 November 2015 (UTC)
If you read beyond the abstract you will find that the paper is critical of newspaper coverage and though it gives suggestions on how to improve news coverage – it never says that their quality is above what we find in review articles. Review articles being what is preferred on Wikipedia. CFCF 💌 📧 21:18, 22 November 2015 (UTC)
I read the full article.
Robinson et al. never says that the quality of news sources is above review articles, and it never says that their quality is below review articles. It doesn't address that question at all.
Once again, would you object to including their conclusion in the project page?
There are significant differences in the quality of reporting within and between major daily UK newspapers, with anonymous articles being the poorest quality, and widespread reliance on press releases from the major UK scientific journals.
--Nbauman (talk) 22:52, 22 November 2015 (UTC)
Of course I would – it's irrelevant to the guideline as we don't allow newspaper sources anyway. CFCF 💌 📧 23:00, 22 November 2015 (UTC)
We do allow news sources. We allow lay summaries. If we allow lay summaries, we should give editors the important advice that some news sources have higher quality than others. For example, this study found that The Times had a higher quality assessment than the other newspapers.
And it is relevant to the guideline because the guideline uses weasel words like "generally" and "most," which aren't supported by any sources. Here is an actual description of what newspapers really do, using scientific methods and published in a peer-reviewed journal, to support (or qualify) that claim. --Nbauman (talk) 23:14, 22 November 2015 (UTC)
Once again I suggest you read the guideline on whose talk page you are posting – MEDRS specifically calls out newspapers and the lay press as one of the types of sources not to be used, regardless of whether some are better than others. None of the newspapers in the study received a perfect score, and the articles were essentially summaries of the original articles – which strengthens the case for foregoing popular press coverage for sourcing in Wikipedia articles. CFCF 💌 📧 23:24, 22 November 2015 (UTC)
Nbauman, how many times do we need to remind you that only articles need RS? Many policies and guidelines only apply to articles. Weasel words are allowed elsewhere, based simply on common sense. Unsourced consensus governs the wording of our policies and guidelines, which is one good reason for newbies to stay away from them. They should gain more experience before seeking to make radical changes, and then do it by seeking consensus. I recognize that you are not a newbie, so just seek consensus, and know when to drop the stick. That you seek to apply sourcing requirements to PAG does make you appear to be a newbie. You should know better. -- BullRangifer (talk) 00:03, 23 November 2015 (UTC)
BullRangifer and CFCF, I understand the rules for guidelines.
My question is, do we allow guidelines that are based on provably false assumptions?
If a guideline says that all newspapers are inaccurate, and there is high-quality evidence from multiple peer-reviewed journals which reports that some newspapers are accurate, do you believe we can continue to use that provably false statement in the guideline?
In my understanding, your answer is yes. You believe that we can use provably false statements in the guidelines.
The mere fact that a guideline is proven false is not sufficient reason to change it.
Right? --Nbauman (talk) 00:12, 23 November 2015 (UTC)
BullRangifer is wrong. The rules for policy pages are different from the rules for articles, but that does not mean that policy pages have no rules. I cannot, for example, go to our policy on edit warring and say "clinical studies have conclusively proven that edit warring on Wikipedia increases your chance of getting cancer by a factor of twenty" and expect the statement to stay in the article because, in the words of BullRangifer. "only articles need RS". No, I would have to provide a reliable source for that claim. Many things in policy pages do not need sources, but any factual claims, if challenged, need reliable sources to back up the claims. --Guy Macon (talk) 02:50, 23 November 2015 (UTC)

It's actually pretty easy to source the general unreliability of medical news reporting. (Mixture of primary and secondary sources, no particular order.)

  • "Studies have persistently shown deficiencies in medical reporting by the mainstream media." [2]
  • "Media coverage of medical research often fails to provide the information needed for the public to understand the findings and to decide whether to believe them." [3]
  • "A number of recent studies have pointed to the poor and variable quality of many health stories in the mainstream media...Some outlets are capable of producing excellent stories, but common flaws across all media include lack of attention to the quality of the research evidence, exaggerated estimates of benefits...[etc]" [4]
  • "[Medical information in the mass media] should be valid, but is often criticized for being speculative, inaccurate and misleading." [5]
  • "...many health care news stories by U.S. news media emphasize or exaggerate the potential benefits of interventions while minimizing or ignoring the potential harms." [6]
  • "Australian lay news reporting of medical advances, particularly by the online news services, is poor." [7]
  • "Whatever topics the journalists are presented with, they will select stories in accordance with news values which are shaped primarily by media and political agendas." [8]
  • "Newspapers preferentially cover medical research with weaker methodology." [9]

That might be more than is necessary to make the point, but the list was longer to start with. :-) Note this doesn't address any of the causes of this situation, and several of the sources point out that there are multiple reasons for it. Sunrise (talk) 04:13, 23 November 2015 (UTC)

I looked at your first reference, and when I found that it doesn't support the claim we are discussing, I didn't bother with the rest. It said:
"Broadsheet newspapers had the highest average satisfactory scores: 58% (95% CI 56–60%), compared with tabloid newspapers and online news outlets, 48% (95% CI 44–52) and 48% (95% CI 46–50) respectively. The lowest scores were assigned to stories broadcast by human interest/current affairs television programmes (average score 33% (95% CI 28–38)).".
That does not support the claim "most medical news articles fail to discuss important issues such as evidence quality". Change the claim to "many medical news articles" or to something like "medical news articles are generally poor at discussing issues such as evidence quality" and I could accept that first source as supporting the claim, but 43% or even 52% is not "most". Feel free to cut down your list to sources that actually support the claim and I will be glad to look at the rest. --Guy Macon (talk) 05:05, 23 November 2015 (UTC)
No, that single paragraph taken out of context from the larger article doesn't necessarily support the claim, but it doesn't contradict it either. You have enough sources, so go ahead and start an RfC if you want MEDRS to allow popular press. Good luck! :) CFCF 💌 📧 13:09, 23 November 2015 (UTC)
That interpretation is actually a misreading of the source. The metric being used is "proportion of the criteria met by each newspaper," not "proportion of newspapers meeting the criteria." An average score of 58% (broadsheet newspapers, the highest group) means that the average newspaper was rated "satisfactory" on 58% of the criteria and "unsatisfactory" on the remaining 42%. The authors' own conclusion from this data is that they found a "modest improvement [over time]" but "the overall quality of medical reporting in the general media remains poor."
I'm not trying to support a specific statement, merely establishing that medical reporting in popular media is generally of low quality. That seems to be a key part of the underlying dispute, based on the above section. The quotes I provided are examples that could be used to support statements of this type. For the statement you mentioned, I'd be fine with your "generally poor at discussing" wording, or with something like replacing "fail to discuss" with "fail to adequately discuss." Sunrise (talk) 02:00, 24 November 2015 (UTC)
These articles, taken together, give an interesting, useful and well-balanced picture of the newspaper coverage of medicine.
Unfortunately, the editors here are selectively quoting them, misquoting them, and inaccurately summarizing them to justify their simplistic anti-newspaper bias.
It doesn't make any difference here what the peer-reviewed journals say.
BullRangifer and CFCF have already said that they don't believe that it's necessary to provide any supporting evidence for the guidelines. Their own personal opinions and anti-media prejudices are enough.
They don't even support changing the WP:MEDRS guidelines even if the consensus of the peer-reviewed literature is that factual statements in the guidelines are wrong.
So after you get through reading their 20 or 30 citations, and find out that they actually agree with you, they will say, "Well, we don't care what the peer-reviewed literature says anyway."
The irony is that here are people who claim that we must use peer-reviewed sources, not newspapers, because peer-reviewed sources are more accurate. But they themselves don't accept peer-reviewed sources for MEDRS. What do they follow? Besides their own biases and prejudices?
"Whatever topics the journalists are presented with, they will select stories in accordance with news values which are shaped primarily by media and political agendas."
Indeed. That's what Wikipedia editors are doing right now. --Nbauman (talk) 14:03, 23 November 2015 (UTC)
Once again, everything you said is wrong - and you just can't meet that with debate. It is time you WP:Dropped the stick and realized that consensus supports the current wording. If you believe the sources support your interpretation you can start an RfC about it and we'll see what the community thinks. CFCF 💌 📧 19:27, 23 November 2015 (UTC)
I wrote that the peer-reviewed journal article, Robinson, "Analysis of health stories in daily newspapers in the UK," concluded, "There are significant differences in the quality of reporting within and between major daily UK newspapers, with anonymous articles being the poorest quality, and widespread reliance on press releases from the major UK scientific journals." It is verifiably, irrefutably true that Robinson wrote that as his conclusion.
You are substituting hyperbole for logic. --Nbauman (talk) 05:36, 24 November 2015 (UTC)
That was not what I was objecting to, I objected to everything in your latest post - all of which is either very questionable or entirely false. CFCF 💌 📧 15:38, 24 November 2015 (UTC)

Answer

In answer to the question posed in the section heading: "Yes." Read WP:NOTPART for the policy. You must still comply with legal and behavioral policies, with the spam blacklist (which is enforced in software anyway), and with the BLP rules against uncited controversial information, but that's about it. If editors choose to write that the sky is a mauvy shade of pinky russet at noon on Earth, then no citations are required. WhatamIdoing (talk) 23:34, 25 November 2015 (UTC)

Propose changes

I do not like either version being presented in the recent dispute.

current

Most medical news articles fail to discuss important issues such as evidence quality,[1] costs, and risks versus benefits,[2] and news articles too often convey wrong or misleading information about health care.[3]

  1. ^ Cooper, B. E. J.; Lee, W. E.; Goldacre, B. M.; Sanders, T. A. B. (May 2011). "The quality of the evidence for dietary advice given in UK national newspapers". Public Understanding of Science. 21 (6): 664–673. doi:10.1177/0963662511401782. PMID 23832153. S2CID 36916068.
  2. ^ Schwitzer G (2008). "How do US journalists cover treatments, tests, products, and procedures? an evaluation of 500 stories". PLOS Med. 5 (5): e95. doi:10.1371/journal.pmed.0050095. PMC 2689661. PMID 18507496.
  3. ^ Dentzer S (2009). "Communicating medical news—pitfalls of health care journalism". N Engl J Med. 360 (1): 1–3. doi:10.1056/NEJMp0805753. PMID 19118299.
proposed change

Most medical news articles fail to discuss important issues such as evidence quality, costs, and risks versus benefits, and news articles too often convey wrong or misleading information about health care.[1]

  1. ^ See for example -

The cited sources do not back the assertions made. Citations should be bundled with an explanation now. Other discussion can determine appropriateness of the statement and citations. Blue Rasberry (talk) 20:12, 23 November 2015 (UTC)

The main problem with that quote is that it takes research about a subset of medical news articles and generalizes to all medical news articles.
A more accurate statement is Robinson's conclusion in "Analysis of health stories in daily newspapers in the UK," that "There are significant differences in the quality of reporting within and between major daily UK newspapers, with anonymous articles being the poorest quality, and widespread reliance on press releases from the major UK scientific journals." This has been repeatedly confirmed in other studies. That's how I want it to start.
As the BMJ repeatedly reminds us, a study only applies to the population it sampled. Cooper and Schwitzer only sampled certain popular newspapers or news sources.
If they had examined MedPage Today or Reuters Health, they would have come to a completely different conclusion. MedPage Today and Reuters train their reporters to follow the HealthNewsReviews checklists, and most of their stories do discuss evidence quality, costs, and risks versus benefits.
I don't think you would include Consumer Reports among the medical news articles that fail to discuss evidence quality, costs, and risks versus benefits, would you?
I feel like Galileo here. If you looked at the medical news stories themselves, you would see that the quote isn't true. And the peer-reviewed journals back me up. --Nbauman (talk) 05:46, 24 November 2015 (UTC)
P.S. Schwitzer said, "Reporters and writers have been receptive to the feedback; editors and managers must be reached if change is to occur." So his 2008 study may not apply in 2015. --Nbauman (talk) 05:53, 24 November 2015 (UTC)
Not true, and we could just add PMID 17379907 : "How doctors can get behind the headlines" Ben Goldacre and we don't need to change anything. CFCF 💌 📧 15:38, 24 November 2015 (UTC)

User:CFCF, what is not true? Do you agree that Schwitzer wrote, "Reporters and writers have been receptive to the feedback; editors and managers must be reached if change is to occur"? --Nbauman (talk) 16:18, 24 November 2015 (UTC)

This Is Why You Have No Business Challenging Scientific Experts

This is why we prefer reviews, rather than using the latest research in the form of single studies, no matter how well-done that research may be:

... there's something very special about being a member of an expert, scientific community, which cannot be duplicated by people like vaccine critic Jenny McCarthy,...

Read all the online stuff you want, Collins argues—or even read the professional scientific literature from the perspective of an outsider or amateur. You'll absorb a lot of information, but you'll still never have what he terms "interactional expertise," which is the sort of expertise developed by getting to know a community of scientists intimately, and getting a feeling for what they think.

"If you get your information only from the journals, you can't tell whether a paper is being taken seriously by the scientific community or not," says Collins. "You cannot get a good picture of what is going on in science from the literature," he continues. And of course, biased and ideological internet commentaries on that literature are more dangerous still. (emphasis added)

Even individual medical professionals can get it wrong if they don't wait for reviews, so why should Wikipedia's editors expect to do better? That's why MEDRS is so important. We must demand reviews most of the time. -- BullRangifer (talk) 04:28, 20 November 2015 (UTC)

I think Chris Mooney is always worth reading.
However, in many cases, there are no reviews of a particular approach to a particular condition. When I read a case history in the NEJM, they will often say that there are no randomized, controlled trials (much less review articles of RCTs) to give guidance on how to treat the condition that is before them. This is common for example in rheumatological diseases. They might only find reports of 5 other patients in the entire literature.
This comes up regularly in Wikipedia for a promising or controversial new drug. There are no reviews of the experience in humans because there is no experience in humans outside of 1 or 2 phase III trials. You may have to wait years for a review. Should Wikipedia ignore a new drug for years?
(Another problem is that the review article is often written by an author with financial ties to a drug company.)
So yes, review articles are best. But we don't always have them for important subjects. --Nbauman (talk) 16:28, 21 November 2015 (UTC)
To coatrack this on to the discussion of journalistic sources -- you will notice that some journalists will always interview the author of a study, and/or an independent expert in the field. That way the journalist can make sure he understands the study correctly. A Wikipedia editor does not. So a Wikipedia editor can read a study, even a review article, and misunderstand it. This is why news stories can be so useful. They explain in simple language, quoting the author, what the study was actually saying.
I remember a WP entry where the editor wrote that a study found that a drug improved the outcomes in heart failure. What the study actually said was that the drug lowered cholesterol, which is a secondary outcome, and not the same thing.
So yes, peer-reviewed journal articles are usually more accurate reports of scientific studies than news stories. However, a good news story (that follows the HealthNewsReviews guidelines, for example) can be more accurate than a Wikipedia editor's summary of that journal article. --Nbauman (talk) 16:45, 21 November 2015 (UTC)
Nbauman – I suggest you actually read the guideline. MEDRS allows for primary sources for very rare diseases or where there is otherwise little research available and no reviews. That editors are not perfect in their judgement or when reading/summarizing articles is not a reason to allow them to badly read/summarize lower quality sources. CFCF 💌 📧 18:02, 21 November 2015 (UTC)
I agree w/ CFCF, and his assessment of Wikipedia:Identifying_reliable_sources_(medicine)--Ozzie10aaaa (talk) 21:58, 21 November 2015 (UTC)
MEDRS as written and MEDRS as enforced are somewhat different things.
Also, I want to echo what CFCF said: it's not just "rare diseases". It's many salvage therapies, many therapies for side effects, many approaches to co-morbidities, etc. There are few recent review articles about pneumonia in a pregnant woman, even though neither pneumonia nor pregnancy are unusual conditions; there are none about what to do if that woman is allergic to the standard antibiotics. (Also, PMID 21888683 on critical illness in pregnant women ought to be cited in many, many articles ...because it's the only recent review that could be used for a ==During pregnancy== section in most of those articles.) WhatamIdoing (talk) 23:56, 21 November 2015 (UTC)
Agree with this. Is there much point to this discussion? Johnbod (talk) 11:02, 22 November 2015 (UTC)
Many journalists not only read the journal papers, but also interview the major experts in the field to make sure they understand the papers, and go to scientific meetings. You can see that in major stories in the New York Times, Wall Street Journal, Science News & Comments, and many others. Many of those journalists have PhD and MD degrees.
Therefore Wikipedians who haven't talked to all those experts in the field, who may merely be scientists or doctors in other fields who have read the papers, have no business challenging major news stories by those experienced science news reporters.
Is that correct?--Nbauman (talk) 19:34, 22 November 2015 (UTC)
No, and I question your use of "many", when in fact you mean "few". It would be wrong for a Wikipedia editor to question such news articles without a source, but that isn't the way Wikipedia works – Wikipedia uses sources, and our guidelines demand higher quality sources than the popular press. CFCF 💌 📧 19:45, 22 November 2015 (UTC)
I'm using the term "many" to mean "more than you think."
At the New York Times, Elisabeth Rosenthal https://en.wikipedia.org/wiki/Elisabeth_Rosenthal and Lawrence Altman http://www.nytimes.com/2015/03/31/health/parsing-ronald-reagans-words-for-early-signs-of-alzheimers.html have MD degrees. Gina Kolata has a masters in applied mathematics http://topics.nytimes.com/top/reference/timestopics/people/k/gina_kolata/index.html
At Science magazine, many of the staffers in the news department have PhDs. http://www.sciencemag.org/site/about/meet_editors.xhtml
I can't use these in the entry, because this is WP:OR, but there are surveys of the educational background of science reporters, which may or may not have been published in peer-reviewed journals, and when professional journalist associations survey their members, the educational level is quite high, including MDs and PhDs. I work with journalists who have MDs and PhDs. They don't usually go around bragging about it. --Nbauman (talk) 22:19, 22 November 2015 (UTC)
I had stopped watching this talk page closely, but when I saw the section name, I thought that it was an argument against MEDRS, that editors who cite MEDRS are so cruel to the nice editors who just want to add fringe material about medical topics. Respect mah authoritah! --Tryptofish (talk) 22:31, 22 November 2015 (UTC)
I have been half-watching these pointless burgeoning exchanges and wondering what was up. Fringe-advocacy? I don't think so; more advocacy for the worth of journalism I think (so Respect mah profession). Nothing wrong with journalism of course, and I'm sure some medical journalism is very fine. But enough of it plainly isn't that our current guideline seems about right to me. Alexbrn (talk) 06:41, 24 November 2015 (UTC)
User:Alexbrn, you are obviously not a scientist, because a scientist would read the peer-reviewed literature to find out what the data is and what conclusions researchers are coming to about that data. The fact is that the guideline contradicts the peer-reviewed literature. Instead, it's based on bias and personal opinion. ("I'm a doctor, and none of you uneducated rabble has any right to write about medicine. I read a newspaper once and it was all wrong.")
Is it right to you that the guideline makes unsupported statements that are contradicted by the peer-reviewed medical literature? That's what the argument is about. --Nbauman (talk) 16:31, 24 November 2015 (UTC)
In guidelines and articles, we try to get it right. In the parts of articles which make medical claims, we follow MEDRS and prefer reviews. In a guideline, we use common sense. If medical "reviews" clearly contradicted some wording in a guideline, common sense, after discussion and consensus, would usually indicate we'd change the wording in the guideline, but guidelines don't usually make medical claims.
If wordings in a guideline were "contradicted by the peer-reviewed medical literature", we wouldn't care at all. That would be OR; anyone can find peer-reviewed research which contradicts just about anything. We would only give it consideration if it was a clear review which contradicted our wording. -- BullRangifer (talk) 07:47, 25 November 2015 (UTC)
What do you mean by "common sense"?
All you did was read some newspapers, find articles you didn't like, and draw conclusions about all of medical journalism from that.
Isn't that it? What else did you do to draw those conclusions? --Nbauman (talk) 17:13, 25 November 2015 (UTC)
I mean what the expression normally means. The rest of what you write must be addressed to some other editor because I haven't mentioned those things. -- BullRangifer (talk) 04:15, 26 November 2015 (UTC)
Nbauman, in my experience, actual scientists don't read the formal literature about science in the media. Instead, they ask their colleagues about their personal experiences with the media, they discover that anyone who's been interviewed more than a couple of times has had at least one bad experience, and adopt, upon the recommendation of their lab mates, a strict policy of "written interviews only" to prevent misquotations. It might all be based on their "bias and personal opinion", but that seems quite common (in the US). WhatamIdoing (talk) 23:44, 25 November 2015 (UTC)
Yes, that's the way some people form conclusions. There's one problem with that.
There were scientists who asked their colleagues about their personal experiences with hiring women, and discovered that anyone who hired a woman had a bad experience. They adopted a strict policy of "no women in the lab" to prevent more bad experiences. (You can replace "women" with any other minority.)
The problem with that is that it's wrong. It leads to false conclusions. It results in bias and prejudice.
That's my complaint. The section "Popular press" is full of bias and prejudice, and we should change it to more closely follow the facts. The other editors are arguing that we should keep the bias and prejudice, and ignore the facts. Do you see anything wrong with that? I'm having trouble following that logic. --Nbauman (talk) 02:47, 26 November 2015 (UTC)

Proposed change 2

Okay, so let's narrow this down. Current guideline text:

The popular press is generally not a reliable source for scientific and medical information in articles. Most medical news articles fail to discuss important issues such as evidence quality,[1] costs, and risks versus benefits,[2] and news articles too often convey wrong or misleading information about health care.[3] Articles in newspapers and popular magazines generally lack the context to judge experimental results. They tend to overemphasize the certainty of any result, for instance, presenting a new and experimental treatment as "the cure" for a disease or an every-day substance as "the cause" of a disease. Newspapers and magazines may also publish articles about scientific results before those results have been published in a peer-reviewed journal or reproduced by other experimenters. Such articles may be based uncritically on a press release, which can be a biased source even when issued by an academic medical center.[4] News articles also tend neither to report adequately on the scientific methodology and the experimental error, nor to express risk in meaningful terms. For Wikipedia's purposes, articles in the popular press are generally considered independent, primary sources.

A news article should therefore not be used as a sole source for a medical fact or figure. Editors are encouraged to seek out the scholarly research behind the news story. One possibility is to cite a higher-quality source along with a more-accessible popular source, for example, with the |laysummary= parameter of {{cite journal}}.

Conversely, the high-quality popular press can be a good source for social, biographical, current-affairs, financial, and historical information in a medical article. For example, popular science magazines such as New Scientist and Scientific American are not peer reviewed, but sometimes feature articles that explain medical subjects in plain English. As the quality of press coverage of medicine ranges from excellent to irresponsible, use common sense, and see how well the source fits the verifiability policy and general reliable sources guidelines. Sources for evaluating health-care media coverage include the review websites Behind the Headlines, Health News Review,[10] and Media Doctor, along with specialized academic journals, such as the Journal of Health Communication; reviews can also appear in the American Journal of Public Health, the Columbia Journalism Review, the Bad Science column in The Guardian, and others. Health News Review's criteria for rating news stories[5] can help to get a general idea of the quality of a medical news article.

I propose the following minor change that would not change the interpretation and hopefully appeases those who have been complaining here:

The popular press is generally not a reliable source for scientific and medical information in articles. Often enough medical news articles fail to discuss important issues such as evidence quality, costs, and risks versus benefits, and news articles too often convey wrong or misleading information about health care.[6][7][8] Articles in newspapers and popular magazines often lack the context to judge experimental results, tending to overemphasize the certainty of any result — presenting a new and experimental treatment as "the cure" or an every-day substance as "the cause" of disease. Newspapers and magazines may also publish articles about scientific results before those results have been published in a peer-reviewed journal or reproduced by other experimenters. Such articles may be based uncritically on a press release, which can be a biased source even when issued by an academic medical center.[9] News articles also tend neither to report adequately on the scientific methodology and the experimental error, nor to express risk in meaningful terms. For Wikipedia's purposes, articles in the popular press are generally considered independent, primary sources.

A news article should therefore not be used as a sole source for a medical fact or figure. Editors are encouraged to seek out the scholarly research behind the news story. One possibility is to cite a higher-quality source along with a more-accessible popular source, for example, with the |laysummary= parameter of {{cite journal}}.

Conversely, the high-quality popular press can be a good source for social, biographical, current-affairs, financial, and historical information in a medical article. For example, popular science magazines such as New Scientist and Scientific American are not peer reviewed, but sometimes feature articles that explain medical subjects in plain English. As the quality of press coverage of medicine ranges from excellent to irresponsible, use common sense, and see how well the source fits the verifiability policy and general reliable sources guidelines. Sources for evaluating health-care media coverage include the review websites Behind the Headlines, Health News Review,[11] and Media Doctor, along with specialized academic journals, such as the Journal of Health Communication; reviews can also appear in the American Journal of Public Health, the Columbia Journalism Review, the Bad Science column in The Guardian, and others. Health News Review's criteria for rating news stories[10] can help to get a general idea of the quality of a medical news article.

References


  1. ^ Cooper, B. E. J.; Lee, W. E.; Goldacre, B. M.; Sanders, T. A. B. (May 2011). "The quality of the evidence for dietary advice given in UK national newspapers". Public Understanding of Science. 21 (6): 664–673. doi:10.1177/0963662511401782. PMID 23832153. S2CID 36916068. {{cite journal}}: Unknown parameter |lay-source= ignored (help); Unknown parameter |laysummary= ignored (help)
  2. ^ Schwitzer G (2008). "How do US journalists cover treatments, tests, products, and procedures? an evaluation of 500 stories". PLOS Med. 5 (5): e95. doi:10.1371/journal.pmed.0050095. PMC 2689661. PMID 18507496. {{cite journal}}: Unknown parameter |lay-date= ignored (help); Unknown parameter |lay-source= ignored (help); Unknown parameter |laysummary= ignored (help)
  3. ^ Dentzer S (2009). "Communicating medical news—pitfalls of health care journalism". N Engl J Med. 360 (1): 1–3. doi:10.1056/NEJMp0805753. PMID 19118299.
  4. ^ Woloshin S, Schwartz LM, Casella SL, Kennedy AT, Larson RJ (2009). "Press releases by academic medical centers: not so academic?". Ann Intern Med. 150 (9): 613–8. doi:10.7326/0003-4819-150-9-200905050-00007. PMID 19414840. S2CID 25254318.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  5. ^ "How we rate stories". Health News Review. 2008. Archived from the original on 2012-07-23. Retrieved 2009-03-26.
  6. ^ Cooper, B. E. J.; Lee, W. E.; Goldacre, B. M.; Sanders, T. A. B. (May 2011). "The quality of the evidence for dietary advice given in UK national newspapers". Public Understanding of Science. 21 (6): 664–673. doi:10.1177/0963662511401782. PMID 23832153. S2CID 36916068. {{cite journal}}: Unknown parameter |lay-source= ignored (help); Unknown parameter |laysummary= ignored (help)
  7. ^ Schwitzer G (2008). "How do US journalists cover treatments, tests, products, and procedures? an evaluation of 500 stories". PLOS Med. 5 (5): e95. doi:10.1371/journal.pmed.0050095. PMC 2689661. PMID 18507496. {{cite journal}}: Unknown parameter |lay-date= ignored (help); Unknown parameter |lay-source= ignored (help); Unknown parameter |laysummary= ignored (help)
  8. ^ Dentzer S (2009). "Communicating medical news—pitfalls of health care journalism". N Engl J Med. 360 (1): 1–3. doi:10.1056/NEJMp0805753. PMID 19118299.
  9. ^ Woloshin S, Schwartz LM, Casella SL, Kennedy AT, Larson RJ (2009). "Press releases by academic medical centers: not so academic?". Ann Intern Med. 150 (9): 613–8. doi:10.7326/0003-4819-150-9-200905050-00007. PMID 19414840. S2CID 25254318.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  10. ^ "How we rate stories". Health News Review. 2008. Archived from the original on 2012-07-23. Retrieved 2009-03-26.
CFCF 💌 📧 12:22, 26 November 2015 (UTC)
User:CFCF, once again, that's a misreading of the sources. I wonder whether you read every one of those articles, because some of them say the opposite of your proposal.
I'd like to know what text, if any, in the individual articles supports the statement:
"The popular press is generally not a reliable source for scientific and medical information in articles."
If you can't cite the literature as evidence for your conclusions, I'd like to know what that evidence is. Is it just your own personal reading of some newspapers? Is it just your own personal prejudice against newspapers?
Statements like "The popular press is generally not a reliable source for scientific and medical information in articles" have as much validity as "Women generally are not good scientists." That is, they are both personal prejudices unsupported by facts.
The truth, as reported eg by Robinson, is:
"There are significant differences in the quality of reporting within and between major daily UK newspapers, with anonymous articles being the poorest quality, and widespread reliance on press releases from the major UK scientific journals."
You can't reject that conclusion using published evidence. So what are you using to reject it? --Nbauman (talk) 16:18, 26 November 2015 (UTC)

(edit conflict)

Neither "most...fail" nor "often enough...fail" capture it correctly. It comes down to the "reputation for fact-checking and accuracy" requirement of wp:RS. In the context of science coverage the reputation is for the very reverse, as discussed at length above. Some may consider that reputation to be unjustified, unconstitutional, discriminatory, or various other kinds of wrong, but it nonetheless is the reputation. That requirement does not mean lower-reputation sources cannot be mentioned, but rather that they cannot be the basis for assertions. In any case, WP is wp:NOTNEWS, so we really should not need to use such sources. We can wait until a source appears that we agree is reliable. If that never happens, we can take that silence as meaning something: the expert community doesn't think the "news" was significant. LeadSongDog come howl! 16:54, 26 November 2015 (UTC)
The reputation among whom? Among a handful of Wikipedians? Or among hundreds of people who have examined the news media in a systematic way, and published their results in verifiable peer-reviewed journals?
Where does that "reputation" come from, as you use the term? The "reputation," as discussed above, is that the quality of popular journalism varies. (In fact, when researchers actually looked at the facts, they found that the quality of peer-reviewed journals also varies, and some of them were less accurate than the most accurate popular press.) How do you distinguish between "reputation" and your own personal opinion? Or are they the same?--Nbauman (talk) 17:45, 26 November 2015 (UTC)
You said it yourself, "the quality of popular journalism varies". We cannot, as pseudonymous editors, winkle out on our own which ones to trust. We must distrust them all. Peer-reviewed journals, on the other hand, come with built-in metrics of article reliability, to which we need only pay attention. Citations are the real test: whether independent secondary (review) articles in other journals are published citing the primary work. Our goal should not be to include everything we can find published, just the best. Even that is more than we can stay on top of, why waste our time on the questionable sources? LeadSongDog come howl! 18:34, 26 November 2015 (UTC)
I don't think that we're writing this section for the purpose of providing information about science journalism. I think we're writing it to change editors' choices for sources. Overall, it seems to have been fairly effective at getting editors to choose better sources. We have many fewer disputes now about whether a human-interest story from Local News is a good source in an article. There have been some costs, e.g., people rejecting a news source that could have been used for their intended purpose, but overall I think it's been working for us.
Are there some specific examples of sources that are suitable for use, but which are being rejected? Maybe if we could see what practical effect is wanted, then we could find a way to achieve that. WhatamIdoing (talk) 21:26, 26 November 2015 (UTC)
User:WhatamIdoing, I'm not sure what's being rejected. But here are several articles whose subject includes clinical medicine.
Here's an article from the NYT, F.D.A. Cites Unapproved Device in Theranos Review, and Faults Handling of Complaints, about the safety and effectiveness of a test. It's clinical medicine, not "social, biographical, current-affairs, financial, and historical information." It wasn't reported in the peer-reviewed journals. It wasn't simply copying information from the FDA reports; the reporter got responses from the company and from independent experts as well. I got this from Health News Review, which has more examples.
There are the stories in the New Yorker by Atul Gawande, for example this story The Bell Curve, which starts with cystic fibrosis, and uses cystic fibrosis as an example of the variation in outcomes among institutions.
Or this series in the New York Times Paying Till It Hurts, by Elisabeth Rosenthal, who is an MD, and who described the problem in great clinical detail. --Nbauman (talk) 02:27, 27 November 2015 (UTC)

What do other people think? If I wrote a sentence saying that Theranos had an unapproved device, and cited a reputable newsletter, would you object on MEDRS grounds? What if I cite Gawande on the question of inter-institutional quality issues? Or a reputable newsletter on how much a treatment costs in the US? WhatamIdoing (talk) 02:39, 27 November 2015 (UTC)

This is unrelated to MEDRS. It's a controversy type piece of information governed by RS, not MEDRS. Go for it. -- BullRangifer (talk) 06:11, 27 November 2015 (UTC)
User:BullRangifer, the Theranos story is about the effectiveness of a medical test, which has not published peer-reviewed studies of its effectiveness. Why is this unrelated to MEDRS?
What about this NYT story on progesterone and pregnancy? Progesterone May Not Help Women With History of Miscarriages, Study Finds? Is that also unrelated to MEDRS? Would you accept that as a source in WP?
If you accept the Theranos story, but not the progesterone story, why do you accept one and not the other? --Nbauman (talk) 11:19, 27 November 2015 (UTC)
The Theranos story has potential to involve MEDRS, but this part you mention deals with the public criticism and controversy, and that aspect is governed by RS, not MEDRS. -- BullRangifer (talk) 14:29, 27 November 2015 (UTC)
The Theranos story deals directly with the efficacy and safety of a medical product. That's clinical medicine. That's the part I'm discussing:
But recently questions have been raised about how well the company’s testing technology works and how much the company is actually using its own technology, as opposed to standard equipment. On Monday, at a conference hosted by the Cleveland Clinic, which has a partnership with Theranos, Ms. Holmes responded to critics by agreeing to publish data showing its tests are accurate. But she did not offer a specific timetable.
The F.D.A. inspected Theranos’s manufacturing and engineering operations from late August through mid-September. The inspections apparently followed F.D.A. approval of one of the company’s tests, for herpes simplex virus 1, over the summer. The inspections were related to the F.D.A.’s purview over medical devices and did not include the company’s testing laboratories, which are regulated by a different agency.
Among the problems cited in the reports was a specific instance involving the tiny tubes, with inspectors saying that a report about a possible difficulty in clearly seeing the clotting of blood was not recorded or investigated as a complaint.
“Complaints involving the possible failure of a device to meet any of its specifications were not reviewed, evaluated and investigated where necessary,” one of the reports said.
The Theranos story is the same kind of story (clinical medicine) as the progesterone story.
User:WhatamIdoing asked essentially for specific examples of reliable, accurate stories that would be rejected under the current version of WP:MEDRS Popular press. I gave you one. And I don't think there are any peer-reviewed journal articles on Theranos at all. If there are, they would be published by Theranos itself, and could be less reliable than the news story, in this case.
So the current wording of WP:MEDRS would require us to use less accurate peer-reviewed studies but not more accurate news stories. --Nbauman (talk) 15:46, 27 November 2015 (UTC)
Which is exactly why it would only be acceptable for the non-MEDRS requiring statements. CFCF 💌 📧 19:49, 27 November 2015 (UTC)
User talk:CFCF, according to the Wikipedia entry for Theranos:
In February 2015, a Journal of the AMA editorial noted that information about such technology had appeared in the mainstream press including The Wall Street Journal, Business Insider, San Francisco Business Times, Fortune, Forbes, Medscape, and Silicon Valley Business Journal, but not in the peer-reviewed biomedical literature.
(That article was by Ioannidis doi:10.1001/jama.2014.17662 )
So you're saying that that if the Wall Street Journal, Wired and other popular press were the only sources that reported the lack of efficacy and safety of Theranos, and Theranos published its own favorable articles in the peer-reviewed literature, we should report the favorable peer-reviewed studies, and ignore the lack of safety and efficacy reported in the WSJ and Wired.
Are you going to delete the references to safety and efficacy sourced to the WSJ and Wired from Theranos?--Nbauman (talk) 22:59, 27 November 2015 (UTC)
Huh? None of those would be reliable. For bio/medical content we use high-quality secondary sources, not just "peer-reviewed literature" - or stuff in the lay press. This whole discussion is circling a non-problem. Alexbrn (talk) 21:48, 30 November 2015 (UTC)
Did you read the Theranos article? --Nbauman (talk) 23:30, 30 November 2015 (UTC)
If "Theranos published its own favorable articles in the peer-reviewed literature" we wouldn't (or at least shouldn't) have used it, so what's the point of your question? Alexbrn (talk) 17:15, 1 December 2015 (UTC)
I'd like to know what kind of source you would accept us using in an article about Theranos. Many doctors have objected that it has not been proven effective or safe, as the WSJ and Wired reported. There are no review articles that mention it. Therefore, according to your logic, there are no acceptable sources at all. Therefore, we shouldn't have an article about Theranos. Or if we do have an article, the article shouldn't mention its effectiveness or safety, and simply discuss its business aspects and history. Is that your position? --Nbauman (talk) 18:59, 1 December 2015 (UTC)
Yes. We deal in accepted knowledge to get encyclopedic coverage, if there ain't any, we remain silent. Alexbrn (talk) 19:18, 1 December 2015 (UTC)
The Wall Street Journal is accepted knowledge. They are a WP:RS. They have a reputation for accuracy, they do thorough fact checking, and report the opinions of experts. They are often cited in the NEJM, JAMA, and other peer-reviewed medical journals. You said below that for individual articles we rule by consensus. So if the editors on Theranos reach a consensus that the Wall Street Journal is a reliable source for that article, you would accept that consensus. Is that right? --Nbauman (talk) 14:58, 2 December 2015 (UTC)
The WSJ does not represent a source of accepted biomedical knowledge since (as has been established above ad nauseam) mass media reporting in this area is notoriously hit-and-miss. Any assessment of reliability needs to be made in respect of some actual content. For any non-trivial biomedical claim the community consensus I'm sure would be that the WSJ is not MEDRS. For other stuff, it could well be a good RS. Alexbrn (talk) 15:09, 2 December 2015 (UTC)
I'm sure that in many cases the community consensus would be that we could use the WSJ for medical claims, on a case by case basis. I think that the community consensus could be that we could use it for an important public issue, where there were no peer-reviewed articles on the subject, and where the WSJ was clearly reporting the subject reliably and accurately. For example, they could decide to use the WSJ as a source for the effectiveness and safety of Theranos.
In the real world of medicine, doctors do use the WSJ to support medical claims, and you can find many references to the WSJ in peer-reviewed medical journal articles.
User:Alexbrn, you are being more royal than the king. You are establishing a higher standard of evidence than the peer-reviewed journals themselves use. You are giving higher credibility to peer-reviewed journals, and lower credibility to newspapers, than Atul Gawande, who writes for both. Do you think you understand journal and news publication better than Gawande? --Nbauman (talk) 02:12, 3 December 2015 (UTC)
Well, I understand the difference between "peer-reviewed" and "secondary", concepts which you seem to elide. I just want us to operate broadly in line with the long established principles of MEDRS. We don't use lay press for significant biomedical claims, and generally prefer not to use primary sources either (even though they may be "peer-reviewed"). The fact we don't use the Daily Mail or the latest bleeding-edge research as the basis for medical/health information is a big contributing factor to the comparative success of our medical content. Maybe look at WP:WHYMEDRS for some background? Alexbrn (talk) 08:16, 3 December 2015 (UTC)

The WSJ is not a reliable source for medical matters. Yes, sometimes they get it right, but there are better sources. Doc James (talk · contribs · email) 08:20, 3 December 2015 (UTC)

User:Doc James, I know medical writers at the WSJ, I know people like Schwitzer who review news sources like the WSJ for their reliability and accuracy, and I know writers and editors (like Schwitzer) who are trying to make medical news more reliable and accurate. I would like to tell them why you believe the WSJ is not a reliable source in medical matters. What is your evidence to support that statement? --Nbauman (talk) 17:02, 3 December 2015 (UTC)
That would be the same WSJ as the one that runs things like this uncritical piece mentioning reiki for treating people with cancer (presumably the "healing that can only be accessed from the heart and through the spirit"), right? Don't see that kind of thing in Cochrane systematic reviews ...
... Or which carries this piece by ... Deepak Chopra, Dean Ornish, Rustum Roy and Andrew Weil (!!!!) which says "The latest scientific studies show that our bodies have a remarkable capacity to begin healing, and much more quickly than we had once realized, if we address the lifestyle factors that often cause these chronic diseases" - and to be clear the "chronic diseases" which our body can apparently self-heal include specifically "obesity, diabetes, heart disease, asthma and HIV/AIDS"? Really, you'd have us draw on this kind of codswallop in our medical articles? Alexbrn (talk) 17:45, 3 December 2015 (UTC)
If at work I used the WSJ as support for medical decision making it would not go over well. Doc James (talk · contribs · email) 17:58, 3 December 2015 (UTC)
User:Alexbrn, you made some good points. I have some good answers.
(1) There is a basic distinction in journalism between the news pages and the editorial pages. This Chopra et al. piece is labeled "Commentary," as it should be. This is particularly true in the WSJ, which had a reputation for accuracy in the news pages, and a reputation for right-wing propaganda in the editorial pages. The editorial page has gotten worse over the years, and outside of WP I've pointed out editorials in the WSJ that were scientifically inaccurate. So if I were to make a case for the accuracy or usefulness of the WSJ, I would only include the news sections (although the editorial page would be a good source of arguments for sometimes foolish positions). For that matter, the peer-reviewed journals have also had editorials and sometimes clinical articles by these same authors. I've read articles by Dean Ornish in JAMA, NEJM, and Annals of Internal Medicine. So if you're going to reject the WSJ editorial page because they run articles by Ornish, would you also reject JAMA, NEJM, and Annals?
(2) The Donna Karan article is more troubling, particularly since it doesn't follow the usual good journalistic practice of getting a critical expert comment, as Health News Review recommends. However, this is also a blog, not a news story. I've actually gotten into arguments with WSJ reporters over what I would call irresponsible medical information in blogs.
So when I say
The Wall Street Journal is accepted knowledge. They are a WP:RS. They have a reputation for accuracy, they do thorough fact checking, and report the opinions of experts. They are often cited in the NEJM, JAMA, and other peer-reviewed medical journals.
I would limit that statement to the news section, not the editorial page and blogs. Editorials and blogs wouldn't be accepted under WP:RS, except as sources for opinions, so those two examples could be excluded as factually accurate sources anyway. And a WSJ story could only be used in a medical article if there were consensus that it was accurate, or at least that it gave a useful presentation of one side of an argument.

User:Doc James, suppose you were working at the Cleveland Clinic, suppose they were planning to adopt the Theranos technology for use on your patients, and you saw that WSJ article about Theranos. Would you bring the WSJ article to the attention of your colleagues?
Suppose you were writing an article for JAMA, as Ioannidis did (Research: Is Biomedical Innovation Happening Outside the Peer-Reviewed Literature? doi:10.1001/jama.2014.17662) about the fact that doctors are adopting medical technology like Theranos without its safety and efficacy being demonstrated in the peer-reviewed literature. The point is that there is no high-quality peer-reviewed literature about Theranos. The only published information available is from sources like the WSJ. Would you use the WSJ as a source, as Ioannidis did? Or would you put off writing the article, until peer-reviewed articles were published, if ever?
Suppose you were a reviewer for JAMA, and you read Ioannidis' manuscript. Would you insist that he delete all the news media sources? Or would you reject Ioannidis' manuscript entirely, since there were no peer-reviewed sources for him to use? --Nbauman (talk) 23:48, 3 December 2015 (UTC)
Yes, that is a good point. If we were looking at Theranos we would look for evidence for the device on PubMed, such as [12] etc.
If that evidence was lacking, then the WSJ may be appropriate. Much of what I am seeing on that page is business discussion rather than health discussion, though, and the popular press is okay for business.
We as a tertiary encyclopedia, unlike the NEJM, have different reference requirements. We are not trying to publish the news; we are trying to cover the already established. Doc James (talk · contribs · email) 03:41, 4 December 2015 (UTC)
OK, User:Doc James, I'll ask my question again. I know medical writers at the WSJ, I know people like Schwitzer who review news sources like the WSJ for their reliability and accuracy, and I know writers and editors (like Schwitzer) who are trying to make medical news more reliable and accurate. I would like to tell them why you believe the news section of the WSJ is not a reliable source in medical matters. What is your evidence to support that statement? --Nbauman (talk) 12:09, 4 December 2015 (UTC)
Even if, hypothetically, the "news" portion of the WSJ was okay (and it's certainly at the better end, especially in comparison to British news), then it's simply not realistic to expect WPMED to maintain a "whitelist" of (portions of) newspapers which are okay for certain time periods when good people and policy prevail, and monitor for changes to personnel and editorial policy to keep the whitelist updated. Much better to align with accepted reputable classes of sources, since this is at least manageable (though even this is not easy). Alexbrn (talk) 12:37, 4 December 2015 (UTC)
There are better sources and we should be using them. While the EB is generally reliable, I would not cite it for medical matters either. Doc James (talk · contribs · email) 12:42, 4 December 2015 (UTC)
Yes, and even browsing around in the WSJ (another non-advantage is that it requires a paid subscription) it seems like a bit of a health minefield. Pretty soon I came across this article which uncritically relays the idea of William Davis that gluten in the diet is "the cause of health problems such as arthritis, hypertension and obesity". It also gives the "Clean Program" diet a soft ride, "detox", "depuff" and all. If we allowed sources like this for medical content it would be a bonanza for the POV-pushers (or at least lead to even longer Talk page arguments).
I agree that the diet article is irresponsible, and doesn't meet the standards of good medical journalism http://www.healthnewsreview.org/ . It's in the Life section, where the editorial standards are apparently much lower. This article was written by Jen Murphy, who is not a WSJ staff reporter but a regular columnist. As I mentioned, I've criticized WSJ medical stories before, and as I recall at least one of them was a freelance column, not a staff-written story. I could cop out by saying that it was an opinion piece, but the WSJ doesn't clearly label it as such. Murphy is the editor of a travel magazine, AFAR, and this is what can happen when a travel writer writes about health and medicine. None of that excuses this bad journalism. I will use this on a medical writer's list, as a good teaching example of what not to do. It's also a good example of how far the WSJ has fallen since Murdoch bought it in 2007.
Nonetheless, I still agree with Gawande that some news sources go through a more rigorous fact-checking process than some peer-reviewed journals, just as some women are better engineers than some men. It's no more accurate to say that "The popular press is generally not a reliable source" than it is to say, "Women are generally not good engineers." Aaron Carroll's column in the NYT is as reliable and accurate as his column in JAMA. And Carroll sometimes critiques review articles (as he did for Nina Teicholz' BMJ review article on dietary fat). Why not write, "Teicholz says X but Carroll in the NYT pointed out the following weaknesses in her study...."? You'd get a better perspective on the evidence.
The important point is that there is a variation in the quality of news reporting just as there is a variation in the quality of peer-reviewed journals, and the more accurate news sources are as reliable as peer-reviewed journals, according to published, peer-reviewed comparisons that use checklists. The simplest item, and most common flaw, on the checklist is distinguishing between association and causation. As the Murphy story shows, there is also variation in quality within a particular magazine.
I argue that good news stories have a role in Wikipedia. They are particularly useful when a reporter starts with a study (or even a review article) and asks outside experts to comment, where the experts often give good criticisms of the study's conclusion (like, association doesn't demonstrate causation). They might get Doc James to point out that the SPRINT conclusions are tentative and we should wait for the review articles. The NYT has many stories like that. http://www.nytimes.com/pages/health/index.html? The NYT used to have some Murphy-quality stories in its Well section, but I can't find any now. The NYT seems to be getting better as the WSJ is getting worse.
As for the whitelist problem, I think it would be relatively easy and fairly objective to rate medical news articles by the checklists that Schwitzer http://www.healthnewsreview.org/toolkit/tips-for-understanding-studies/ and that UK group used. You can "diagnose" an unreliable news story as objectively and accurately as you can diagnose lupus. Since my ultimate goal is medical education, I think it would be good to encourage WP editors to test news stories (and peer-review journal articles) by those criteria, and understand why a news story is reliable or not reliable. In my science education they taught me that following cookbook rules (like peer-reviewed journal articles yes, news stories no) was not good science; you have to understand the reasons for the rules. It's better to tell editors, "We're reverting your good-faith edit because it unfortunately doesn't follow these understandable rules for good journalism," than "We're reverting it because we don't use news stories."
I think the accuracy of news media is more complicated than "The popular press is generally not a reliable source," and the peer-reviewed literature backs me up. I think that WP:MEDRS should reflect the peer-reviewed literature, not a few N=1 examples. I really don't understand how people who insist on peer-reviewed journal articles are ignoring peer-reviewed journal articles on the accuracy of news media. --Nbauman (talk) 15:42, 4 December 2015 (UTC)

The problem is that this could at best be a minuscule improvement for a very small subset of articles, and at worst a massive amount of work/debate and detriment for the vast majority of articles. There is no simple way to judge the quality of news articles, and many of our editors do not even have the capacity to do so. This means either we include none or all, and as far as I understand you're not supportive of either of those alternatives. CFCF 💌 📧 11:16, 5 December 2015 (UTC)

Stable Version

  • Version from 1 October 2015: "any biomedical information in articles"[13]
  • Version from 2 September 2015 (as edited by CFCF!): "any biomedical information in articles"[14]
  • Version from 7 July 2015: "the biomedical information in all types of articles"[15]
  • Version from 13 January 2015: "the biomedical information in all types of articles"[16]
  • Version from 4 January 2014: "the biomedical information in all types of articles"[17]
  • Version from 26 January 2013: "the biomedical information in all types of articles"[18]
  • Version from 24 January 2012: "the biomedical information in articles"[19]
  • Version from 1 January 2011: "the biomedical information in articles"[20]

CFCF edits changing "biomedical information" to "biomedical and health information": [21], [22], [23], [24], [25]

Related: Wikipedia:Administrators' noticeboard/Incidents#Disruptive editing on Wikipedia:Identifying reliable sources (medicine) by CFCF --Guy Macon (talk) 05:25, 2 November 2015 (UTC)

Guy Macon neglects to mention the other stable version I pointed to in the #Clarifying "biomedical" section above. You know, the stable version preceding this and this edit? I don't see Guy Macon supporting that stable version, but he sure is supporting the one he thinks fits his "WP:MEDRS doesn't apply to epidemiology" view. Further, even though there is already a thread about CFCF at WP:ANI (that's a WP:Permalink), Guy Macon has created another one there about him. Tsk, tsk. Flyer22 Reborn (talk) 11:38, 2 November 2015 (UTC)
Nonsense. I simply picked the first edit made in January of 2011, 2012, 2013, 2014, and 2015, the first edit in July (mid-year) of 2015, and the first edit made on the first day of the last three months. I correctly identified the consensus version that was stable for at least five years. CFCF announced[26] that he was changing the guideline to support his position in an ongoing discussion. --Guy Macon (talk) 18:15, 2 November 2015 (UTC)

Flyer22, in the two diffs you cite, the lead paragraph of the article said
"Wikipedia's articles are not intended to provide medical advice, but are important and widely used as a source of health information.[1] Therefore, it is vital that any biomedical information in articles be based on reliable, third-party, published secondary sources and accurately reflect current knowledge."
both before and after the edit, and you yourself had no problem with "the biomedical information in all types of articles".[27]
So how do a couple of diffs that don't change the lead paragraph in any way show evidence that the lead paragraph was anything other than the version that I have clearly shown to be stable for at least the last five years? --Guy Macon (talk) 20:12, 2 November 2015 (UTC)
I just stated this in the thread you started on CFCF: No nonsense. I showed that "health" was already at various parts of the guideline, and that this was also a stable part of the guideline. You, however, clearly do not support that stable version. And I am female, by the way (in case you didn't know). And I indeed had an issue with the "biomedical" change, which is why I stated, "If we are going to stress 'biomedical, then we should link to it, since, as seen at Talk:Domestic violence against men, editors commonly do not understand what biomedical entails." You were clearly one of the editors I was referring to. That change in text is also why I started this discussion. Flyer22 Reborn (talk) 23:06, 2 November 2015 (UTC)
I think you reverted too far back here. QuackGuru (talk) 19:09, 2 November 2015 (UTC)
Reverting to a point before a discussion is the right thing to do. Changing a guideline during discussion has a great possibility of skewing the discussion. The first thing an editor commenting in the discussion should do, and most likely does, is check the guideline; they may not always check the history, but probably should. If they find the guideline matches the preferred outcome it was changed to, they will agree with that preferred outcome. AlbinoFerret 19:34, 2 November 2015 (UTC)

QuackGuru, as I have told you multiple times, if you believe that my edits violated any Wikipedia policy or guideline, report me at WP:ANI and receive your boomerang. Your recent behavior -- making accusations on various article talk pages where they are completely off-topic -- is becoming disruptive. Please stop. --Guy Macon (talk) 20:12, 2 November 2015 (UTC)

Biomedical information includes all health information. This edit was just clarifying the definition [28] Doc James (talk · contribs · email) 21:20, 2 November 2015 (UTC)

With all due respect, you cannot simply declare that the wording that has been in place for the last five years is wrong and that we should simply accept the new wording as "just clarifying the definition" when the question of which wording to use is being actively discussed with veteran editors on both sides of the issue. --Guy Macon (talk) 21:29, 2 November 2015 (UTC)
Agree that the wording change is significant enough and/or vague enough to warrant a full discussion. This is not just an issue of clarification.Dialectric (talk) 21:32, 2 November 2015 (UTC)
Yes, biomedical is quite clear, "clarifying" by adding 'health' is not useful. It is possible to endlessly argue what is or is not 'health information' or even worse 'health related information'. Biomedical is already quite clear both to specialists and laymen, at least those laymen who should be editing the type of articles this is intended to relate to. JbhTalk 21:49, 2 November 2015 (UTC)
(edit conflict) No, the issue is that the guideline has always been interpreted like this, but it wasn't until recently that some editors with ulterior motives started questioning the standard definition of biomedical that had been used here for a long time. As I pointed out, it links to WP:BIOMEDICAL, whose definition to the layman includes health, epidemiology etc. There is no expansion of scope with this wording; it is only a clarification. With the link in the lede that defines biomedical we are actually not in a different position, except that readers and editors will be expected to go to an additional article to see the definition invoked by MEDRS. CFCF 💌 📧 22:04, 2 November 2015 (UTC)
I am having trouble reconciling your claim that "the guideline has always been interpreted like this" with the multiple editors who are telling you on this very page that the guideline has never been interpreted the way you interpret it. Why not just post a simple RfC presenting the two alternative wordings to the community and see how much support you really have? If the consensus is what you say it is the RfC will show it. I'm just saying. --Guy Macon (talk) 22:47, 2 November 2015 (UTC)
  • (edit conflict) Looking at the diff Doc James provided I notice that the words "treatment efficacy" have also been removed. This alone is reason enough to call an RfC because "treatment efficacy" indicates that the original scope of this was very limited. Going from "treatment efficacy" to "health" seems to be a huge broadening of the scope unless I am missing something, which is entirely possible considering the flux of this page. JbhTalk 22:00, 2 November 2015 (UTC)
    "Treatment efficacy" is a new thing. You should ignore it for the time being. When this page gets back to normal, then I'd be happy to see your thoughts on it. In the meantime, if you want more information about it, then you can read the section #The best evidence above. WhatamIdoing (talk) 05:11, 3 November 2015 (UTC)
    @WhatamIdoing: OK. Thank you for clearing that up. JbhTalk 13:26, 3 November 2015 (UTC)
With great respect, it is difficult to conceive that topics such as "health economics", "health logistics", effective placement & staffing of hospitals & clinics, public health communications & awareness campaign methods fall within the realm of biomedical information; or that it is necessary or productive of a quality encyclopedia to include them in the scope of WP:MEDRS. I believe that "health" is too broad a category for the application of this guideline. - Ryk72 'c.s.n.s.' 21:57, 2 November 2015 (UTC)
WP:BIOMEDICAL information by definition includes "health" information. User:Jbhunley, your edit made the definition ambiguous. User:Anthonyhcole's edit was correct. It is not broadening the definition. QuackGuru (talk) 22:07, 2 November 2015 (UTC)
WP:BIOMEDICAL is an essay and cannot reasonably be used to reword policy or guidelines without thorough discussion. Dialectric (talk) 22:12, 2 November 2015 (UTC)
"For this reason it is vital that any biomedical information is based on reliable, third-party, published secondary sources and that it accurately reflects current knowledge." We are linking to the essay. QuackGuru (talk) 22:17, 2 November 2015 (UTC)
(edit conflict) @QuackGuru: You say my edit made it ambiguous. Would you please say how it is ambiguous and what is ambiguous? What is something that you feel is covered under one version but ambiguous under the other? Claiming ambiguity without illustrating it does nothing to help me understand your position. JbhTalk 22:19, 2 November 2015 (UTC)
"Biomedical information" is a subset of "health information", not the other way around. User:Doc James, you got this one backwards. "Patient reports wearing a seat belt every time she drives a car" is "health information" (that primary care physicians in the US are encouraged to document), but there is nothing "biomedical" about wearing a seat belt. The "biomedical" bit starts when there's a car wreck, not when I put on my seatbelt before moving my car to the other side of an empty driveway (because it feels weird to be in the car and not be strapped in). WhatamIdoing (talk) 05:17, 3 November 2015 (UTC)
How something affects your health is definitely in the realm of "biomedical information". Heck, even the CDC has a fact sheet about seatbelts [29], and such is a valid MEDRS source for the effects of seatbelt use on your health. Regardless, requiring strict standards on the sourcing of such health claims seems to be in the spirit of MEDRS given use of the term "health information" in the preamble. Adrian[232] 21:56, 6 November 2015 (UTC)
Adrian, I agree that how something affects health should normally be covered information. The question is where to draw the line between covered and non-covered information. For example: The number of people who always use seat belts when riding in vehicles is inversely related to the number of health problems caused by car wrecks. However, is just the number of people who use seat belts when riding in vehicles (e.g., "Drivers in Ruritania used seatbelts during 80% of their trips last year" or "Children were restrained in an appropriate child safety seat during 95% of driving trips in passenger vehicles last year") something that should be covered? The bare number can reasonably be construed as "health information", as public health researchers are very interested in that number, but it cannot reasonably be construed as "biomedical information", as there is no "bio" there. WhatamIdoing (talk) 17:28, 8 November 2015 (UTC)
It seems one would have to do quite a bit of wikilawyering to make a connection between general stats on seatbelt use and a person's health. Until the connection is made between those stats and health in the article, it really wouldn't count as "health information" as far as common sense would seem to have it. I think you've highlighted a general problem with the current wording of "biomedical" since using this information to imply a health link without a "bio" part should be covered by MEDRS or worded to avoid such an implied link. If the subject is of great interest to health professionals, then we should be able to find acceptable health sources for the information that is more rigorous. Adrian[232] 20:10, 8 November 2015 (UTC)
Adrian, I'm talking about information about rates of seat belt use, used on its own, without any statement or implication about anything, e.g., the entire current contents of Seat belt use rates by country. One need not do any wikilawyering at all to make a connection between general stats on seatbelt use and public health. We could write an entire article on the connection between seat belt use and health (actually, two entire articles, because Health effects of seat belt use and traumatic Seat belt syndrome could both be written), but I'm talking only about the plain old rates: a person either takes or doesn't take a non-medical action, and everyone knows that said action is about health preservation.
It's easy to find good medical sources on this subject, because rates of seat belt use are very interesting to health professionals. Preserving health is the only rational reason to wear a seat belt. But should we declare that it's "ideal" for this non-biomedical information to come from a reputable medical journal, or might an equally (or more) reputable non-medical source also be "ideal" for that information? WhatamIdoing (talk) 20:51, 8 November 2015 (UTC)

Comment: IMO, no additional wording here is "only a clarification." Every definitional word in MEDRS is significant, and will at some point be cited in a content dispute. Health is a broad term, wide open to interpretation, and without a doubt will in practice increase the scope of (attempted) MEDRS application. Changes to MEDRS should be made only with the widest scrutiny and consensus, to discourage dispute, confusion, and undermining of core verifiability and sourcing guidance. --Tsavage (talk) 22:37, 2 November 2015 (UTC)

Is Dean Ornish's book a reliable source?

Dr. Dean Ornish's Program for Reversing Heart Disease shows up on scholar. Is it a reliable source to cite in support of statement that a low fat, plant based diet, exercise and stress management programme has been clinically proved to result in reversal of heart disease? Please qualify your replies.[30] Yogesh Khandke (talk) 05:41, 5 December 2015 (UTC)

No because of WP:MEDRS (also WP:REDFLAG &c.). Alexbrn (talk) 05:48, 5 December 2015 (UTC)
Redflag doesn't apply here, quoting MEDRS would be making a circular argument. Yogesh Khandke (talk) 07:07, 5 December 2015 (UTC)
WP:REDFLAG is part of WP:V which is policy. It applies everywhere. If you're going to make a claim about some lifestyle changes being capable of "reversing" heart disease, this is exceptional. You are going to need multiple super-strength sources (like reviews in top-tier medical journals). Not a popular diet book from one rather controversial guy. Alexbrn (talk) 07:29, 5 December 2015 (UTC)
Have you read the book and its references?Yogesh Khandke (talk) 07:52, 5 December 2015 (UTC)
No, and it wouldn't matter if I had. You have your answer based on WP:PAGs, and it is an obvious case. I suggest we close this (also this is not really the right place to discuss this, the page is for discussion of the MEDRS guidelines itself, rather than questions on particular sources). Alexbrn (talk) 08:05, 5 December 2015 (UTC)
Pasting policy links doesn't help, quote text followed by link, please. What do you make of the fact that it shows up on Google scholar? How significant or otherwise is that? Please qualify reply. Yogesh Khandke (talk) 08:24, 5 December 2015 (UTC)
We are not required to exactly explain each and every aspect of why that is a horrible source – that is just bizarre. Alexbrn has perfectly summarized why it is a horrible source, and if you can't cross-reference his statements with the linked guideline/policy pages it is not his or any other editor's duty to educate you. CFCF 💌 📧 11:12, 5 December 2015 (UTC)
Shut up and get out is no answer. Alexbrn mentions above that this isn't the right place to discuss particular sources, will the right forum be suggested? Pl. Yogesh Khandke (talk) 12:05, 5 December 2015 (UTC)
Despite this not being the right place, you now have your answer. Please don't WP:FORUMSHOP this around as it would be disruptive. You'll get the same answer wherever there are experienced editors. Alexbrn (talk) 12:08, 5 December 2015 (UTC)
Please be civil; don't indulge in personal attacks. I'll wait for others to look at this, so I've struck off my last comment. Yogesh Khandke (talk) 12:27, 5 December 2015 (UTC)
Agree it is not a good source. Doc James (talk · contribs · email) 12:43, 5 December 2015 (UTC)

I'm sorry, no; it's not a reliable source for MEDRS content.

Sources are reliable per the content they support; a source cannot be judged reliable or unreliable unless the content it supports is specified. I suspect there are WP compliant sources which identify exercise, diet, and stress management as impacting heart disease, so it might be worth checking on that while making sure that what you are looking at in relation to MEDRS content is strictly WP compliant.(Littleolive oil (talk) 17:03, 5 December 2015 (UTC))

Absolutely right, LittleOliveOIl. There is no consensus amongst scientists that a low fat, plant based diet has been clinically proven to reverse heart disease. But I do think that if the OP wants to attribute, à la "According to Dean Ornish, a low fat, plant based diet can reverse heart disease", that would be perfectly acceptable. It just doesn't pass the WP:V test to state it like it's an indisputable, proven fact. LesVegas (talk) 18:09, 6 December 2015 (UTC)
One might want to be somewhat more cautious in the statement (e.g., "some kinds of heart disease" rather than all types – I'm pretty sure that even Ornish would not claim that it worked for every single subtype). But you can use WP:INTEXT attribution and careful editing to provide WP:DUE weight to this minority POV. WhatamIdoing (talk) 02:15, 8 December 2015 (UTC)
According to Ornish's book, it is clinically proven that a diet, stress management and exercise programme results in a reversal of heart disease, as demonstrated by various parameters. You could look it up; the programme is also available here for perusal. "This proven, non-invasive program consists of 18, four-hour sessions focused on comprehensive lifestyle changes in four equally weighted elements." And this: "UCLA Health is proud to offer Dr. Ornish's Program for Reversing Heart Disease (Ornish Reversal Program), the only scientifically proven program to stop the progression and even reverse the effects of heart disease. This nationally recognized program has been so effective in undoing years of damage to the heart that Medicare made the decision to cover it under a new benefit category – intensive cardiac rehabilitation – making it the first integrative medicine of its kind to receive this level of support." This is from UCLA.[31] Yogesh Khandke (talk) 15:26, 8 December 2015 (UTC)
Well, you could attribute it both to UCLA and Ornish. And it may very well be true, and certainly is "clinically proven" to Ornish's standards. But stating it as though it were an indisputable fact would require consensus statements of several large bodies of scientific groups, such as the NIH, or NHS. We have very large studies that also conflict with that information, such as the Framingham Study, just to name one. Just because UCLA adopts this or Ornish believes this, doesn't allow us to state it as an objective fact and in Wikipedia's voice. LesVegas (talk) 17:43, 8 December 2015 (UTC)
(ec)It's promotion, which is not encyclopedic. It's in many ways the very opposite of what we're trying to achieve through MEDRS. --Ronz (talk) 17:45, 8 December 2015 (UTC)
On the facts, UCLA's claim that this program (of which diet is only one of four equally weighted components) is "the only scientifically proven program to stop the progression and even reverse the effects of heart disease" is wrong. Bariatric surgeons also claim that gastric bypass reverses heart disease (e.g., PMID 17903770).
However, I believe that the low-fat diet is appropriate to mention. Ornish might be one of the most popular promoters of it, but the efficacy of that diet is and has been a significant viewpoint for that subject. The POV shouldn't be excluded merely because Ornish feels more like a salesman than like a scientist. WhatamIdoing (talk) 04:53, 9 December 2015 (UTC)
I used to think that Dean Ornish was a nutrition faddist and maybe a quack, until I read his articles in the major peer-reviewed journals. http://jama.jamanetwork.com/article.aspx?articleid=188274 Intensive Lifestyle Changes for Reversal of Coronary Heart Disease JAMA. 1998;280(23):2001-2007. doi:10.1001/jama.280.23.2001 (among others). I'm sure his work has been mentioned in review articles. One way to find it would be to search Science Citation Index for review articles that mention Ornish's articles, but I don't have access to Science Citation Index since the New York Public Library stopped subscribing. --Nbauman (talk) 17:18, 9 December 2015 (UTC)
That article is from the last century. I think the problem is that Ornish has become more ... alternative in more recent years and his later stuff we won't generally find in MEDRS. Gorski is interesting on this.[32] Alexbrn (talk) 17:23, 9 December 2015 (UTC)

I see an unfortunate situation amongst editors who feel the need to act as the editorial board here on Wikipedia and demand that all research must go through them, or some skeptical pundit, first. We shouldn't be doing that. The question of whether Ornish's claims are reliable doesn't have anything to do with whether some editors think Ornish is a quack or what Gorski says about him, but rather with what Ornish's claims are reliable for. No, they are not reliable for the claim that "a low fat, plant based diet, exercise and stress management programme has been clinically proved to result in reversal of heart disease"; we have to have broad scientific consensus for a claim like that. But Ornish's statements are reliable for the claim that "According to Dean Ornish, a low fat, plant based diet, exercise and stress management programme has been clinically proved to result in reversal of heart disease." So, yes, Yogesh Khandke, there's no reason you can't add that claim, as long as it's attributed to Ornish. LesVegas (talk) 17:57, 9 December 2015 (UTC)

there's no reason you can't add that claim ← except that WP:WEIGHT would need to be agreed. That a view exists does not automatically qualify it for inclusion. Secondary sources help evaluate due WP:WEIGHT. I don't know how this works in the case of Ornish, but it is nevertheless something that needs to be weighed. Alexbrn (talk) 18:27, 9 December 2015 (UTC)

Well, Gorski doesn't determine what carries weight, first of all, and neither does whether or not Ornish has alternative views. Some editors feel the need to censor anything that doesn't jibe with an old, rigid 20th century view of medicine and, when questioned, say, "well, it doesn't weigh heavily enough". I don't agree with Ornish at all myself, but I think editors should at least strive to be objective enough to admit he's a prominent figure and that his views should be mentioned where they're appropriate, as long as they are attributed. Yeah, if we acted like Ornish was God and wrote his claims in Wikipedia's voice, that would be undue weight, no doubt. But mentioning what a leading figure believes, where appropriate, is not. LesVegas (talk) 18:37, 9 December 2015 (UTC)
If Ornish has "alternative" views then that is significant since WP:FRINGE comes into play, meaning that the "alternative views" should not be aired unless they're recognizably contextualized within the mainstream view. But I don't think anybody is disagreeing here. Alexbrn (talk) 18:44, 9 December 2015 (UTC)
Alternative views are not the same as fringe views. LesVegas (talk) 19:52, 9 December 2015 (UTC)
To quote WP:FRINGE "We use the term fringe theory in a very broad sense to describe an idea that departs significantly from the prevailing views or mainstream views in its particular field. For example, fringe theories in science depart significantly from mainstream science and have little or no scientific support". So the notion that lifestyle changes can *reverse* heart disease is obviously WP:FRINGE. Alexbrn (talk) 19:58, 9 December 2015 (UTC)
LesVegas—Here I am prompted to remind you that it is neither the purpose of Wikipedia to right great wrongs nor to present divine truth, but to report what is verifiable. CFCF 💌 📧 20:44, 9 December 2015 (UTC)
Wow, this is a clear WP:KETTLE; only the world's blackest pot would accuse a chrome kettle of blackness, because it's reflecting your black self back to yourself. Righting great wrongs? What, like what you two are doing in trying to censor the encyclopedia of anything alternative minded whatsoever, claiming fringe everywhere you can? All I am suggesting is to add what's verifiable. And what is verifiable is that Dean Ornish believes a plant based diet reverses heart disease, not that a plant based diet DOES reverse heart disease. When you've removed the plank from your eye, perhaps you'll see clearly enough to reply on point. LesVegas (talk) 22:20, 9 December 2015 (UTC)
"There is no alternative medicine. There is only medicine that works and medicine that doesn't work." Dawkins, 2003—At Wikipedia we present what is verifiable, and presenting fringe views in top level articles is not due weight. CFCF 💌 📧 08:08, 10 December 2015 (UTC)
Ornish has played by the rules and done the work of publishing in the medical literature. With 49 citations in PubMed so far, he may be wrong, but he's not a fringe view and even if he's not in the mainstream, he's at least a significant minority view. --Nbauman (talk) 06:33, 10 December 2015 (UTC)
But his fringe views are not published in the medical literature... Otherwise why would we be discussing using his pop-sci book? CFCF 💌 📧 08:08, 10 December 2015 (UTC)
Exactly. The issue with Ornish AIUI is that while he does indeed "play by the rules" in his scientific publishing, he then, free from the shackles of peer review and editorial oversight, makes pronouncements in the popular media which are inconsistent with the conclusions that could properly be drawn. Any such overblown views are obviously fringe views wrt medicine: if they weren't, we would be able to source them from mainstream sources, rather than needing to pluck them out of a lifestyle book. I don't know how "significant" a minority view they are - that would require some measure of following from respected sources, I'd suggest. Alexbrn (talk) 08:08, 10 December 2015 (UTC)

A few passing comments:

  • Yes, Ornish's book is "reliable" for what Ornish says. Ornish's view actually is a significant POV. "Fringe" means almost nobody holds that POV. It doesn't mean that the POV is objectively right or wrong.
    • Consequently, the question isn't "Does this diet actually do what he says it does?" The question is "Is this generally accepted?" How shall we find out? Well, let's do a little mental exercise: Imagine that you called up a dozen cardiologists in your area. Imagine that they all take your call. You say, "Hey, my aging mother has ____ heart disease. Should she be following a low-fat diet and eating lots of vegetables instead of bacon and hamburgers?" What do you think the answer will be? Do you think that any of them will object to a low-fat diet, no red meat, and lots of veggies? I'll give you a hint: Some early research on Ornish's Program for Reversing Heart Disease is given a positive review by the American Heart Association. There might be a few that say eliminating saturated and trans fats is more important than the overall level, and you might find a few who wonder whether your hypothetical mother would actually follow the diet, but I suspect that you will find exactly zero in your sample who explicitly reject a low-fat, low-meat, high-vegetable diet.
    • As Nbauman said, Ornish's research is certainly present in the medical literature. He publishes articles himself (e.g., [33]), and others write about his ideas (e.g., [34]). One might publish a popular book for many reasons, ranging from a desire to teach actual patients (rather than other researchers), to a desire to save time in clinical practice by saying "Here, just read this" instead of explaining for the 10,000th time, to a desire to be a millionaire. There's nothing wrong with publishing a pop science book. I hear that even some certifiably evidence-obsessed researchers like Ben Goldacre and David Gorski have done that.  ;-)
  • Dawkins' definition of alternative medicine is a (small) minority POV. There is quite a lot of stuff that doesn't work in conventional medicine, e.g., arthroscopic knee surgeries for chronic arthritis. But nobody says that surgery is "alternative" – not even Dawkins. (They have a different name for that: it's bad medicine.)
  • It's not our job, as Wikipedia editors, to decide whether Ornish's book exceeds the evidence in his studies. Really: Not. Our. Job. Whether there's good evidence behind it doesn't really even matter. What matters is whether Ornish's POV is supported by garden-variety non-researcher cardiologists. And it is held by quite a lot of them, and therefore it is DUE to mention the existence of the POV. We don't have to say that it works (although it does, for some people); we don't have to say that it works better than other options (which it probably doesn't). We merely need to say that this POV exists. We don't need to cite the pop-sci book to do this (although, in principle, I have no objection to citing a pop-sci book for such a purpose); Ornish and others have certainly published enough over the years that we could cite the medical literature directly for a claim that this POV exists. WhatamIdoing (talk) 06:41, 11 December 2015 (UTC)
The issue that's been at hand is Ornish's claim of reversing heart disease, not whether a low-fat diet (which Ornish incidentally supports) is a generally good idea. If the idea of a low-fat diet has general currency as a good thing among cardiologists (and yes, I'm sure eating healthily, getting exercise, and other lifestyle things do), then I doubt that's down to Dean Ornish's book. Alexbrn (talk) 07:01, 11 December 2015 (UTC)
It doesn't really matter if Ornish's book is the reason that cardiologists believe that a low-fat, plant-based diet, coupled with moderate exercise, stress-reduction practices, and good social support, have positive outcomes for patients. The fact is that they do believe his plan is a (NB: not "the only", but "a") reasonable one for patients to follow. Therefore, this POV should be included somehow. DUE says, "If a viewpoint is held by a significant minority, then it should be easy to name prominent adherents". It is, in fact, very "easy to name prominent adherents", e.g. Ornish (also the authors of The China Study and many others). That's one of the key reasons that we know that mentioning this is DUE and not FRINGE.
On the merits of the diet, I think you need to look past the sales pitch. It's not really hard to "reverse" some heart diseases. Most people can (temporarily) "reverse" dyslipidemia by not eating anything (or not much) for two or three days. Also, I believe you'll find that Ornish's claim is actually "halt or reverse", and that he counts any improvement, no matter how partial or slight, as "reversal". It would be very surprising indeed if a vegetable-heavy diet combined with moderate exercise made heart disease worse, wouldn't it? So why would anyone be surprised to hear that eating more veggies and less fat, while getting moderate exercise, "halts or reverses" heart disease? WhatamIdoing (talk) 07:51, 11 December 2015 (UTC)
cardiologists believe that a low-fat, plant-based diet, coupled with moderate exercise, stress-reduction practices, and good social support, have positive outcomes for patients ... this POV should be included somehow ← I don't think there'd be any argument about that, particularly if the "somehow" meant getting a WP:RS/AC source for asserting what "cardiologists believe". But it's beside the point. The question being posed here was whether Ornish's book could be cited "in support of statement that a low fat, plant based diet, exercise and stress management programme has been clinically proved to result in reversal of heart disease". Alexbrn (talk) 08:00, 11 December 2015 (UTC)
That question has been answered: The pop-sci book is certainly a reliable source (even under MEDRS) for making the statement that Dean Ornish believes this. The obvious follow-up question is, is saying that DUE? IMO the answer is "yes". WhatamIdoing (talk) 15:27, 11 December 2015 (UTC)
Yes for where? If on the article on Dean Ornish, of course—but it certainly is not due at Heart disease or any other top level articles.CFCF 💌 📧 16:26, 11 December 2015 (UTC)
It should definitely be on Dean Ornish, without question. I agree with CFCF that it shouldn't be on Heart Disease unless it's in a section accompanied by a lot of other methods or theories on treatment of heart disease, but I wouldn't think Ornish's theory alone is prominent enough to act as the standalone theory or method to be on that page. Certainly there are other articles where it absolutely belongs, such as Plant-based diet, an article where it isn't at yet. LesVegas (talk) 18:51, 11 December 2015 (UTC)
Agree, otherwise our top-level articles on all manner of chronic diseases would start filling up with diet-based claims of "reversal" sourced to the new wave of very prominent media doctors who seem to be publishing pop-sci books in this area. We might even need to include something in our Death article ;-) Of course if there are well-sourced claims then it all becomes much simpler. Alexbrn (talk) 11:19, 14 December 2015 (UTC)

MEDDATE

Here's an approximate idea of what I'm thinking about MEDDATE issues.

The ideal maximum age for a source depends upon the subject:

  • Major topics in a major, actively researched area (e.g., first-line treatments for hypertension): review articles published within approximately the last five years
  • Minor topics in a major area (e.g., treatment of hypertension in a person with kidney cancer): approximately five years or the three most recent review articles, whichever is longer
  • Major topics in a minor area (e.g., treatment of cystic fibrosis): approximately five years or at least three reviews, whichever is longer
  • Minor topics in a minor area (e.g., treatment of hypertension in a person with cystic fibrosis): the several most recent review articles, and any primary sources published since the penultimate review article
  • Very rare diseases (e.g., most genetic disorders): the several most recent peer-reviewed articles, regardless of absolute age

Does this seem approximately like what you all would expect to find if you were looking for sources? (On the fourth line, it may help to know that hypertension does not seem to be a common complication of cystic fibrosis.) WhatamIdoing (talk) 18:22, 6 October 2015 (UTC)
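To make the draft table above concrete, here is a minimal sketch encoding it as a lookup. The category keys and function name are purely illustrative (nothing here is settled guideline wording); the recommendation strings are taken from the table.

```python
# Hypothetical encoding of the draft MEDDATE table as a (topic, area) lookup.
# Keys and the recommended_age() name are illustrative, not guideline text.
MEDDATE_DRAFT = {
    ("major topic", "major area"): "review articles from roughly the last five years",
    ("minor topic", "major area"): "about five years or the three most recent reviews, whichever is longer",
    ("major topic", "minor area"): "about five years or at least three reviews, whichever is longer",
    ("minor topic", "minor area"): "the several most recent reviews, and any primary sources newer than the penultimate review",
    ("very rare disease", "any"): "the several most recent peer-reviewed articles, regardless of absolute age",
}

def recommended_age(topic, area):
    """Return the draft table's recommendation, or fall back to judgment."""
    return MEDDATE_DRAFT.get((topic, area), "use editorial judgment")

print(recommended_age("major topic", "major area"))
```

A lookup like this also makes the table's gap visible: anything outside the five named cells falls through to editorial judgment, which is arguably the point of keeping it a rule of thumb rather than a rule.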

I like the idea of a table of different variants like this, but it's going to need some work. There are a couple of issues here:
 1. Ideal sources are obviously always new, up-to-date systematic reviews, regardless of topic area
 2. Do we use the number of reviews to determine which are major/minor topics, and how do we know when a topic falls under any of these groups?
 3. How do we determine if reviews/articles have been published? You have Web of Science and Scopus for this, but very few editors have access. PubMed doesn't really cut it.
 4. The wording "regardless of absolute age" is problematic, because all you need to do is go back to a 1970s East German source and you can promote a wealth of alt-med diseases. I'd be more comfortable if we used something akin to Orphanet [35] to determine what rare diseases are - many old purported diseases are just that, and aren't considered real today.
 5. "any primary sources" is far too inclusive
 6. The best possible sources don't depend on the subject but rather on how much research has been performed - this means that even some very rare diseases have quite significant bodies of research.
CFCF 💌 📧 20:31, 6 October 2015 (UTC)
I agree that it needs work; that's why I posted it.  ;-)
1: Ideal sources are not always systematic reviews. The ideal sources for treatment efficacy are systematic reviews – assuming any exist – but systematic reviews are not the ideal source for 90% of article content.
2 and 3: I think that the number of reviews available might be one reasonable metric for major/minor (and all the things in between). We can base this on PubMed and treat it as a rebuttable presumption: if I find nothing in PubMed, but you've got access to Scopus and find more, then you can share your information with me. By the way, here are some quick numbers:
  • "Hypertension" is mentioned in 12,730 (tagged) reviews on PubMed in the last five years, and is present in the title of 3,162.
  • "Breast cancer" is mentioned in 6,447 and in the title of 3,741.
  • "Pneumonia" is mentioned in 2,808 and in the title of 695.
  • "Cystic fibrosis" (a heavily researched rare disease) is mentioned in 1,543 and in the title of 692.
  • "Preeclampsia" is mentioned in 779 (plus more under the hyphenated spelling "pre-eclampsia") and in the title of 279.
  • "Down syndrome" is mentioned in 469 and in the title of 160.
  • "Kidney cancer" is mentioned in 225 (plus 174 non-duplicates for "renal cancer") and in the title of just 64 (plus 51 for "renal cancer").
  • "Wilson disease" is mentioned in 62 and in the title of 17 (plus more for "Wilson's disease").
  • "Oculodental digital dysplasia" (incredibly rare disease) is mentioned in zero.
(These are all quoted-phrase searches on PubMed, merely for illustration rather than ideal searches for these subjects.)
As a quick rule of thumb, maybe this would work: if there are more than 100 hits among reviews published on the subject in the last year, then you should probably be using the "major" criteria for the bulk of your sources. If there are fewer than 100, then that might not be possible (because "hits" ≠ "reviews actually about the subject"). Or we could build it on in-title searches: use good reviews if you've got more than a couple dozen, but when you've only got 20 (or fewer) to choose from, the fact is that the available sources might not cover all of the material that ought to be in the article. For example, there is exactly one review with both "cystic fibrosis" and "hypertension" in the title during the last five years, and if you need to source a sentence about non-pulmonary hypertension (perhaps to mention the need to control hypertension in advance of getting a lung transplant), then there are zero recent reviews available on that exact subject.
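For anyone wanting to reproduce counts like the ones above, here is a sketch of how such quoted-phrase, review-restricted PubMed searches can be composed. The `[Title]`, `[Publication Type]`, and `[PDAT]` field tags are standard PubMed search syntax; the helper name and its defaults are my own illustration, not part of any guideline.

```python
# Sketch: build a PubMed search term for "reviews mentioning <phrase> in the
# last N years", of the kind used for the illustrative counts above.
def review_count_query(phrase, years=5, end_year=2015, title_only=False):
    """Return a quoted-phrase PubMed term restricted to reviews in a date range."""
    field = "[Title]" if title_only else "[All Fields]"
    start_year = end_year - years
    return (
        f'"{phrase}"{field} AND review[Publication Type] AND '
        f'("{start_year}"[PDAT] : "{end_year}"[PDAT])'
    )

print(review_count_query("cystic fibrosis"))
```

The resulting term can be pasted into the PubMed search box, or sent as the `term` parameter to NCBI's E-utilities `esearch` endpoint, whose response includes a total hit count; either way, the caveat above stands that hits are not the same thing as reviews actually about the subject.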
However, I think that most experienced editors are going to have an easy time deciding where a subject falls on the scale. If I have no trouble discovering sources, then it's a major topic. If my searches come up empty, then it's not. You should be using the best of what you've got, unless and until someone demonstrates that better ones exist; conversely, when better ones don't exist, then you should not be hassled by people who care about the date on the paper more than they care about the content of the article.
4. Bad sources are bad sources. Age is not the sole, or even main, determinant of whether a source is bad.
5. Bad sources are bad sources. Primary vs secondary status is not the sole, or even main, determinant of whether a source is bad.
6. Best possible sources do depend on the subject, because the subject determines how much research has been published. I believe that you meant to say that the best possible sources don't depend upon disease prevalence.  ;-) Also, it's necessary to write these rules to work for non-disease subjects, such as drugs and surgical techniques. WhatamIdoing (talk) 22:00, 6 October 2015 (UTC)
While I appreciate the thinking behind this proposal, I suspect that it will make things worse rather than better. As we all know, there is already a tendency – usually but not always among editors who aren't familiar with how to read and use the published literature – to treat MEDRS as a series of yes/no checkboxes that must be met, rather than as a set of rules of thumb which a skilled editor might consider in evaluating a given source-assertion-context triple. (See also the related problem of editors who think that "reliability" is a magical inherent trait possessed by a source, without regard for how or where that source is being used. And editors who had WT:MED watchlisted earlier this year will be familiar with the individual who thought "evaluate this article" meant "make a complete list of its citations older than 5 years and declare them not MEDRS-compliant, regardless of context".)
Creating a more-specific-looking set of criteria increases the tendency for slavish adherence to the letter of the rule rather than to the purpose of the rule. Saying that "most experienced editors are going to have an easy time deciding where a subject falls on the scale" misses the likely source of the problem— most experienced and competent editors already grasp the need for flexibility in applying MEDRS' guidelines. Where a question about a source arises under these new criteria, the discussion will be diverted from the central question of whether or not the source-assertion-context triple at hand is appropriate, and into bickering over whether a particular topic and area are major/major, major/minor, minor/major, or minor/minor. Once that binary categorization is achieved, there will be blind counting of number of reviews or blind adherence to the five-year criterion—which is the same problem we already encounter. And since the new criteria look more specific and 'scientific', then we're probably going to have more trouble dislodging individuals from their mistaken belief that these rules of thumb are etched in stone. TenOfAllTrades(talk) 11:49, 16 October 2015 (UTC)
Thanks for the thoughtful comment.
Part of the problem is structural: We want to tell people to do X if they're looking for new sources/creating new material, but X plus Y if they're evaluating whether an existing statement is okay. As in: If you're writing a new article, then use the best sources you can possibly lay your hands on. But if you're trying to figure out whether Source X verifies Statement 1, then "the best sources" aren't required. You need one that is good enough, but it only has to be barely good enough.
What do you think about killing any mention of five years at all? WhatamIdoing (talk) 21:29, 20 October 2015 (UTC)
I'm certainly open to the idea. The five-year rule of thumb (and its chronic misinterpretation as an iron-clad commandment) may be causing more problems than it solves, these days.
We generally prefer more recent sources, all other things being equal. But all other things are never exactly equal, and what qualifies as "more recent" varies a lot depending on the field, the content, and the context. I think we (Wikipedia editors) sometimes fall down when we over-prioritise recent publication dates over other measures of source quality and reliability. TenOfAllTrades(talk) 02:26, 21 October 2015 (UTC)
Actually, we don't necessarily prefer more recent sources, because of WP:RECENTISM. We do tend to prefer the most recent reviews, of course, because they can consider the impact of more recent primary sources. The "five-year rule of thumb" came into being when it was suggested that in many fields a review cycle (the time between consecutive major reviews) took roughly that amount of time. It's obvious that there is considerable variance in the time before a particular major review becomes superseded by an equally important successor, and unless editors take the time to find the most recent high-quality review, the rule of thumb becomes counter-productive. We really need to be saying something like "In many topics a review conducted more than five to ten years ago will have been superseded by more up-to-date ones, and editors should try to find those", rather than suggesting we reject a perfectly good source solely on the grounds of its age. --RexxS (talk) 15:21, 21 October 2015 (UTC)
As that's a very sensible and intelligible way of telling people what they really need to know, I have boldly replaced the sentence. I've also (separately) expanded it slightly, to reinforce the point that expert opinion doesn't necessarily change every five years. (Revert expected in three, two, one...) I suspect that the definition of chicken pox has been pretty stable for some decades now, so those sources aren't really "out of date" even if they're more than five or ten years old.  ;-) WhatamIdoing (talk) 06:59, 17 November 2015 (UTC)
As you can see at Wikipedia talk:Identifying reliable sources (medicine)/Archive 10#Standardizing the five-year rule, Bluerasberry was also worried about the application of the five-year standard; so it will be interesting to see what he thinks of these changes. Flyer22 Reborn (talk) 08:49, 17 November 2015 (UTC)
@Flyer22 Reborn and WhatamIdoing: Thanks for pinging me Flyer. WAID, I changed your edit to be only five years. I would prefer to not complicate the "5 year rule of thumb" to be a "5-10 year rule of thumb". I hope the idea is the same as what you intended, only simpler. Blue Rasberry (talk) 14:20, 17 November 2015 (UTC)
Hi Bluerasberry. I undid your change (I admit that when I did so, I hadn't realized that the "five to ten" wording was itself quite new) since I didn't want to encourage people to get fixated on five years. We already know that's a problem; in too many editors' minds "five-year rule of thumb" gets truncated to just "five-year rule". In edge cases and less-frequently-published-on topics I'd much prefer to see editors have the necessary conversation on the article talk page rather than shut down discussion with a blind MEDDATE says so. TenOfAllTrades(talk) 14:47, 17 November 2015 (UTC)
TenOfAllTrades I changed what you did to "five or so". Comment? I also do not want this used as a hard rule, but I feel it is very useful that we have some agreement about the rule of thumb being one certain number of years. Blue Rasberry (talk) 14:58, 17 November 2015 (UTC)
We often tell editors that the rule is really up to ten years or so, but it depends upon the subject (e.g., five years for hypertension, ten years for most rare diseases). But the structure of the statement is far more important to me than the numbers used in it. If we're happier with "five years or so" or "approximately five years", or whatever, then I won't object. WhatamIdoing (talk) 02:11, 18 November 2015 (UTC)
I agree to go with "five years or so." But I will note now that it's always irritated me how certain editors figured that the guideline meant that "older than five years" equates to "it's no longer good", even when we clarified WP:Recentism aspects in the guideline. Even the biomedical debating currently going on at this talk page shows that certain editors have interpreted WP:MEDRS too strictly. See this statement I made, which WhatamIdoing thanked me for via WP:Echo. The "two or three" years aspect was also a problem, which is why I'm glad that Blue Rasberry remedied that in the "Standardizing the five-year rule" discussion. Flyer22 Reborn (talk) 09:50, 18 November 2015 (UTC)

Review articles and SPRINT

How do we handle the SPRINT trial? http://www.nejm.org/doi/full/10.1056/NEJMoa1511939 DOI: 10.1056/NEJMoa1511939

WP:MEDRS says that ideal sources are systematic reviews. However, some WP editors have taken the position that systematic reviews, and not clinical articles or editorials, are the only sources we should use.

There are no systematic reviews that include SPRINT.

Does this mean that until it is discussed in a review article, we should ignore SPRINT, in articles like Hypertension and Management of hypertension, where it is not mentioned, and which cite articles from 2014, 2013, 2012 and earlier? --Nbauman (talk)

Wait for secondary coverage. There's no hurry. Alexbrn (talk) 19:19, 1 December 2015 (UTC)
So you advocate leaving the articles with information that is incorrect and misleading, according to the latest published research and expert opinion.
And if they read about SPRINT in the news media, and want to find out more about it in Wikipedia, you don't think we should tell them. --Nbauman (talk) 20:14, 1 December 2015 (UTC)
No, I advocate our articles reflecting properly settled knowledge. Using primary sources and lay press isn't a safe way to do that. Sometimes very significant primary studies may be included, to be decided on a case-by-case basis by the usual process of consensus. Alexbrn (talk) 20:20, 1 December 2015 (UTC)
Agreed, the process on how to cover such topics is clear, take this up at hypertension if you find it should be included. CFCF 💌 📧 20:36, 1 December 2015 (UTC)
In any discussion at Hypertension, they would refer back to MEDRS. I just want to make it clear. You are saying that primary sources can be used on a case-by-case basis, by consensus of the editors of the specific article. Does everybody else here agree with that? --Nbauman (talk) 20:42, 1 December 2015 (UTC)
Of course. Wikipedia is always ruled by consensus over rules (hence WP:IAR). That said, there needs to be an exceptionally strong case to go against the grain of the WP:PAGs and one should always beware the falsity of a spurious WP:LOCALCON. Alexbrn (talk) 20:45, 1 December 2015 (UTC)
(edit conflict) Just read the damn guideline. CFCF 💌 📧 20:46, 1 December 2015 (UTC)
Different people read the guideline and come to different conclusions. Are you willing to accept my conclusions? --Nbauman (talk) 22:44, 1 December 2015 (UTC)
I'm with Nbauman here. SPRINT is Level 1 evidence that has been vetted by a DSMB, the NHLBI, and the editorial staff of the NEJM. The first two even made the decision to terminate the trial early because of how important they felt it would be to communicate the results to the outside world. I'm not saying we should say that current listed recommendations in Hypertension#Management are necessarily out of date. But to not even mention the existence of the trial (in conjunction with its many important inclusion/exclusion criteria [namely CVD risk and no diabetes]) in the "Research" section of management of hypertension is doing a disservice to our readers. NW (Talk) 23:45, 1 December 2015 (UTC)

It is an excellent question regarding how SPRINT is going to affect medical practice. I discussed this issue at the NIH and with the Cochrane Hypertension Group in the last couple of weeks. Secondary sources are being worked on as we speak and will be out soon.

Yes, more aggressive BP management decreased the risk of death by about half a percentage point in those at high cardiovascular disease risk, per this trial, while rates of serious adverse events were increased by 2.3%.

As others have said "The full importance of this study will not be known until the results are scrutinized by those with a critical eye for methodological biases and this study is viewed as part of the totality of evidence available on this important clinical question."

I agree with User:NuclearWarfare the results should be discussed in a research section of the management of hypertension article. Doc James (talk · contribs · email) 08:28, 3 December 2015 (UTC)

I would just add that the SPRINT trial was published in the NEJM along with 4 other articles commenting on them. Those 4 articles are secondary sources. They're not "third-party" secondary sources, but if the NEJM published a review article next year, I don't think anyone would exclude that under MEDRS. At any rate, SPRINT is not "research" in the same sense that in vitro, animal or database studies are research. --Nbauman (talk) 00:27, 4 December 2015 (UTC)
Analogizing SPRINT to the JUPITER trial: we have two large multicenter RCTs of cardiovascular prevention. When JUPITER came out in the New England Journal of Medicine, there may have been "secondary" sources the same week. But there was also quite a lot of controversy that led to serious disagreement in Archives of Internal Medicine many months later. You would have never known about the disagreement in the field from just reading the text of the JUPITER Trial article, and Wikipedia's readers would have been really poorly served if we had immediately changed cholesterol and cardiovascular disease to read "everyone over 60 should be getting their CRP checked and taking statins now now now". We can and should mention SPRINT as recent research in the management of hypertension article, but waiting until review articles come out to reassess Hypertension#Management seems wise to me. NW (Talk) 00:46, 4 December 2015 (UTC)