The Cass Report (UK)

  • Thread starter Lynch101
  • #1
Lynch101
Gold Member
Just wondering if anyone here is familiar with the Cass Report, which was recently published in the UK?

I've been reading a lot of different material on it, including criticisms of its methodology. I was wondering if anyone here might have any insights into the report itself or the criticisms of it?
 
  • #2
I do wonder what the "Cass Report" might be about?
 
  • #3
Report of the Cass Review

Hilary Cass conducted a review of the state of gender medicine in the UK, particularly with regard to children (so, roughly speaking, a review of the processes and decisions around deciding if a child is "really trans" and, if so, potentially proceeding to treatment with hormones and surgery). My understanding is that the conclusion was that the field is poorly evidenced, with many studies of "gender questioning" children being methodologically flawed. I think the criticisms boil down to arguments about whether "there's loads of evidence that Cass ignored" or "there's loads of worthless pseudo-science that Cass correctly discarded".

The report has led to an abrupt change in practice in England and Scotland, with the NHS in both countries announcing they've stopped prescribing puberty blockers in this context within days of the final publication. So it's quite politically charged. Given the general polarisation level of the whole "boys can be girls too" debate and the number of agendas in play here I'd be a bit wary of any opinion, to be honest. If I wanted to investigate, I would tend to actually review a paper or two myself and see if the report's criticisms are accurate.

Whether discussing reviews of such studies falls within PF's remit or sails too close to the political sphere I can't judge.
 
  • #4
Ibix said:
Whether discussing reviews of such studies falls within PF's remit or sails too close to the political sphere I can't judge.
Yeah, I'll close this thread temporarily for Mentor review. If the thread is allowed, it may be moved to the Medical forum.
 
  • #5
Thread reopened after Mentor discussion and moved to Medical forum.
Lynch101 said:
I was wondering if anyone here might have any insights into the report itself or the criticisms of it?
Please keep the discussion confined to this topic only. Any diverging into politics will be shut down.
 
  • #6
jrmichler said:
Thread reopened after Mentor discussion and moved to Medical forum.

Please keep the discussion confined to this topic only. Any diverging into politics will be shut down.
Thanks for re-opening it.

I'm purely interested to know if the methodology of the review is robust, so the discussion could potentially be limited to principles of systematic reviews.

I think the report is robust, but I've read criticisms of it and am interested to hear from the PF community, who would have a better understanding of these things.

I'm reading up on other information as well, but I figured it would be good to ask here as well.

For example, is anyone familiar with the GRADE rating system? Is it widely used and accepted?
 
  • #7
Does anyone know if excluding low quality studies from the synthesis of results, in a systematic review, is bad practice?
 
  • #8
"Low Quality" ≡ "Unreliable"
(either poorly executed or not enough data to show a statistical difference)
(or sometimes carried out by a researcher of low credibility based on other studies by same researcher (poor reputation))
 
  • #9
Lynch101 said:
Does anyone know if excluding low quality studies from the synthesis of results, in a systematic review, is bad practice?
In this field, you might find a study on the incidence of suicidal thoughts pre- and post-treatment. All studies have an attrition rate (people who answer the first time but not the second), but here there's a good chance that some attrition is suicide, and if you don't follow up your non-responders you have a (potentially very large) bias. But following up non-responders is expensive, so it may not happen. So what should a systematic review do with a study where it doesn't? Maybe they can apply some model to estimate a correction for the attrition rate, but unless there's a strongly evidenced model they can use that's just guessing. They may just have to say "the methodology here is too flawed to show anything useful".

Certainly you should have some reasonably objective measure for whether a study is worth including or not (I presume that's what the GRADE system is supposed to do). And you can argue about a particular study's quality. But why would giving little-to-no weight to garbage be bad practice?
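To make the attrition point concrete, here's a toy simulation with entirely made-up numbers (nothing to do with any real study): patients have some outcome score, and the worse a patient's outcome, the likelier they are to be lost to follow-up. The average over responders then looks better than the truth.

```python
import random

random.seed(0)

# Hypothetical illustration of attrition bias (invented numbers, not real data).
# Simulate 1000 patients' post-treatment outcome scores (higher = better).
N = 1000
patients = [random.gauss(50, 10) for _ in range(N)]

# Suppose the worse a patient's outcome, the likelier they are lost to
# follow-up. The responders are then a biased sample of the cohort.
responders = [s for s in patients if random.random() < min(1.0, s / 60)]

true_mean = sum(patients) / len(patients)
observed_mean = sum(responders) / len(responders)

print(f"true mean:     {true_mean:.1f}")
print(f"observed mean: {observed_mean:.1f}")  # overstates the true average
```

The gap between the two means is exactly the bias a review has to worry about when a study doesn't chase its non-responders, and without knowing *why* people dropped out there is no principled way to correct for it after the fact.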
 
  • #10
Lynch101 said:
Does anyone know if excluding low quality studies from the synthesis of results, in a systematic review, is bad practice?
Of course, it only matters if the low-quality studies yield a different answer than high-quality studies. So the question boils down to "how many low-quality studies would it take to convince you a high-quality study was wrong?" 2? 10? 50?

(Note: this is a general answer - I am not qualified to comment on the quality of any study under discussion)
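One standard way to put numbers on that question is inverse-variance weighting, the usual meta-analysis tool (not necessarily what the reviews under discussion used, and the figures below are invented for illustration): each study is weighted by 1/variance, so a noisy study simply counts for less.

```python
# Standard inverse-variance weighting arithmetic; numbers are invented.
# A study's weight in a meta-analysis is 1 / variance of its effect estimate.
var_high = 0.01   # one well-run study: precise estimate, small variance
var_low  = 0.25   # a noisy study: imprecise estimate, large variance

w_high = 1 / var_high   # weight 100
w_low  = 1 / var_low    # weight 4

# How many such low-quality studies match one high-quality study's weight?
k = w_high / w_low
print(k)  # 25.0
```

Note the important caveat: this arithmetic only works if the low-quality studies are merely *noisy*. If they are systematically biased (e.g. by unaddressed attrition), no number of them converges on the truth, which is one argument for excluding them outright rather than down-weighting them.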
 
  • #11
Ibix said:
In this field, you might find a study on the incidence of suicidal thoughts pre- and post-treatment. All studies have an attrition rate (people who answer the first time but not the second), but here there's a good chance that some attrition is suicide, and if you don't follow up your non-responders you have a (potentially very large) bias. But following up non-responders is expensive, so it may not happen. So what should a systematic review do with a study where it doesn't? Maybe they can apply some model to estimate a correction for the attrition rate, but unless there's a strongly evidenced model they can use that's just guessing. They may just have to say "the methodology here is too flawed to show anything useful".

Certainly you should have some reasonably objective measure for whether a study is worth including or not (I presume that's what the GRADE system is supposed to do). And you can argue about a particular study's quality. But why would giving little-to-no weight to garbage be bad practice?
I would completely agree with the answer to your rhetorical question. There have been (what I believe are) spurious objections to the report, but I want to make sure I'm not missing anything.

For the report a number of peer-reviewed, systematic reviews were commissioned. [Some of] those reviews used the GRADE system (which is widely accepted, I believe). They used predefined inclusion/exclusion criteria and then used an amended version of the Newcastle-Ottawa scale (for non-randomised trials) to assess the quality of the studies. Most were judged to be either of low or moderate quality, with only one high quality study.

The report then says, "The low quality studies were excluded from the synthesis of results."

This seems perfectly reasonable because, as per your question, why would you give weight to unreliable data?

But some of the objections I have read have claimed that all studies which were not excluded by the exclusion criteria should be included in the evidence synthesis.

Is there a nuance between "evidence synthesis" and "synthesis of results" that I am missing perhaps?
 
  • #12
Vanadium 50 said:
Of course, it only matters if the low-quality studies yield a different answer than high-quality studies. So the question boils down to "how many low-quality studies would it take to convince you a high-quality study was wrong?" 2? 10? 50?

(Note: this is a general answer - I am not qualified to comment on the quality of any study under discussion)
Ah yes good point.

I think some people are objecting to the exclusion of low quality studies because they think those studies would support a particular narrative.
 
  • #15
Lynch101 said:
Is there a nuance between "evidence synthesis" and "synthesis of results" that I am missing perhaps?
Not that I can see. I think @Vanadium 50's response is spot on.
Lynch101 said:
I think some people are objecting to the exclusion of low quality studies because they think those studies would support a particular narrative.
I think that's exactly what's going on. Cass excludes or ignores a lot of studies that claim to support one side of the debate. She says it's because they're low quality so don't actually add anything to either side; some critics say it's because she pre-decided what the outcome would be.

As I said before, you probably want to read a couple of the low quality studies and Cass' reviews of them and form your own opinion. I suspect the review is fair and her critics are biased, but official investigations certainly can have their conclusions written first.
 
  • #16
Lynch101 said:
Apologies, I'm not sure I follow. Do you mean you don't consider the GRADE system to be very rigorous?
It is rigorous rationalization of idiopathic confirmation bias; so, no, I do not consider it to be anything more than psycho-/philoso-babble.
 
  • #17
Bystander said:
It is rigorous rationalization of idiopathic confirmation bias; so, no, I do not consider it to be anything more than psycho-/philoso-babble.
I'm not sure I see the issue. It seems like a reasonable list of factors one should take into account when judging the strengths and weaknesses of evidence.
 
  • #18
Bystander said:
It is rigorous rationalization of idiopathic confirmation bias; so, no, I do not consider it to be anything more than psycho-/philoso-babble.
As far as I know, it's a widely accepted standard.

I guess good or bad, the nature of objections isn't so much with the particular system used but more whether it was followed or not.
 
  • #19
Ibix said:
Not that I can see. I think @Vanadium 50's response is spot on.

I think that's exactly what's going on. Cass excludes or ignores a lot of studies that claim to support one side of the debate. She says it's because they're low quality so don't actually add anything to either side; some critics say it's because she pre-decided what the outcome would be.

As I said before, you probably want to read a couple of the low quality studies and Cass' reviews of them and form your own opinion. I suspect the review is fair and her critics are biased, but official investigations certainly can have their conclusions written first.
Cheers. I'll have to read some of the low quality studies in more detail. I've had a look at the NICE* reviews of them.

I share your suspicions, that the review is fair (to a relatively high degree) and that the critics are biased. I'm in discussion with one such critic, a researcher who claims to be writing a paper outlining the criticisms.

I'm just trying to look into some of the criticisms he has already mentioned in public, because I anticipate these will form the basis of the paper - if one is indeed forthcoming and it's not just a face saving claim.

*The National Institute for Health and Care Excellence conducted the systematic reviews of the evidence.
 
  • #21
One could (were one bored enough) argue for a long time about an objective evaluation of subjective data. The Cass review appears to (correctly, IMO) do little more than point out that there isn't enough reliable data to have much of a conversation.
 
  • #22
Bystander said:
I do not consider [GRADE] to be anything more than psycho-/philoso-babble.
You are entitled to your opinion.

Bystander said:
[GRADE] is rigorous rationalization of idiopathic confirmation bias
But that is a personal theory that is contrary to mainstream science (GRADE is a widely recognised tool of evidence-based medicine whose aim is to eliminate confirmation and other biases).
 
  • #23
Dullard said:
The Cass review appears to (correctly, IMO) do little more than point out that there isn't enough reliable data to have much of a conversation.
No, the Cass report does much more than that.

In particular, it makes 32 specific recommendations (summarised here) and has led to the National Health Service in England (NHS England) restructuring its provision of gender identity services for children and young people, and changing its clinical policy on the prescription of puberty-suppressing hormones.
 
  • #24
Dullard said:
One could (were one bored enough) argue for a long time about an objective evaluation of subjective data. The Cass review appears to (correctly, IMO) do little more than point out that there isn't enough reliable data to have much of a conversation.
Cheers Dullard, that is my interpretation of it as well*, but there are attempts to discredit it.

*That there isn't enough reliable data [upon which to base serious medical interventions].
 
  • #25
To be fair, an individual's perception of the report probably boils down to a single question:
Is evidence required to justify treatment, or to prohibit it? The report does have a 'justify' bias (for those who consider that a 'bias.')
 
  • #26
Dullard said:
To be fair, an individual's perception of the report probably boils down to a single question:
Is evidence required to justify treatment, or to prohibit it? The report does have a 'justify' bias (for those who consider that a 'bias.')
I think the point is that the lack of evidence doesn't only mean that we don't know if the treatment does anything, but also that we don't know if it is actively harmful. Where there is reliable evidence, it seems to indicate a higher incidence of psychological issues in these patients than in the general population. If a patient's gender issue is a symptom of something else, that something else needs treating and the gender problems will resolve themselves, whereas treating the gender issue won't fix the underlying psychological problem. And the lack of evidence means that we don't know (our opinions aside) which way around the causation is.
 
