> [...] asked some statistician colleagues if they could help us recover more information from his data.
It's a shame more organisations don't have statisticians on hand to ensure that they are being accurate and honest when seeking, interpreting, and presenting data. Perhaps this is another consequence of the dominance of Excel: people have collections of numbers, and you can pummel them into a spreadsheet and produce some nice charts and graphs, but that leads to people over-interpreting the data.
> After a lot of work, the answers were, by and large, that we couldn’t see any such differences in our data.
This is surprising to me. I remember reading the blogs around the time and it seemed like a sensible claim. I can't remember anyone digging into the data and pointing out flaws. Did they?
I think I believed it because I feel "unteachable".
EDIT: I freaking love this paper because of its discussion of a mistake made during a phase of mental ill health, and the recovery journey afterwards.
> This is surprising to me. I remember reading the blogs around the time and it seemed like a sensible claim. I can't remember anyone digging into the data and pointing out flaws. Did they?
I'm not sure that was possible. I haven't re-read the original 2006 paper, but it sounds like its claims may simply have been false:
> I did a number of very silly things whilst on the SSRI and some more in the immediate aftermath, amongst them writing “The camel has two humps”. I’m fairly sure that I believed, at the time, that there were people who couldn’t learn to program and that Dehnadi had proved it. Perhaps I wanted to believe it because it would explain why I’d so often failed to teach them. The paper doesn’t exactly make that claim, but it comes pretty close. It was an absurd claim because I didn’t have the extraordinary evidence needed to support it. I no longer believe it’s true. I also claimed, in an email to PPIG, that Dehnadi had discovered a “100% accurate” aptitude test (that claim is quoted in (Caspersen et al., 2007)). It’s notable evidence of my level of derangement: it was a palpably false claim, as Dehnadi’s data at the time showed.
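For what it's worth, the kind of "digging into the data" asked about above isn't complicated: the paper's claim reduces to whether the consistent/inconsistent grouping from the aptitude test predicts exam outcomes. A minimal sketch of that sort of check, assuming the raw data were available in tabular form (the file name and column names here are made up for illustration, not from the paper):

```python
# Hypothetical re-analysis sketch: does the "consistent/inconsistent" grouping
# from the aptitude test actually predict final exam marks?
# The CSV and its columns ("group", "exam_mark", "passed") are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("camel_study.csv")  # hypothetical export of the raw study data

consistent = df.loc[df["group"] == "consistent", "exam_mark"]
inconsistent = df.loc[df["group"] == "inconsistent", "exam_mark"]

# Two-sample t-test: is there any detectable difference in exam performance?
t, p = stats.ttest_ind(consistent, inconsistent, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")

# A "100% accurate" aptitude test would require every consistent student to pass
# and every inconsistent student to fail -- easy to falsify by counting exceptions.
accuracy = ((df["group"] == "consistent") == df["passed"]).mean()
print(f"accuracy as a pass/fail predictor: {accuracy:.0%}")
```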
> 1.5.1.2 For people with moderate or severe depression, provide a combination of antidepressant medication and a high-intensity psychological intervention (CBT or IPT).
So, for moderate or severe depression, the standard initial treatment is an SSRI and therapy.
For less severe depression, though, the guidance is to start with non-pharmaceutical options, and only move to drugs if those don't work.
I'm not saying that 2014 treatment is magical. But there are some important differences:
There's now a recognition of "subthreshold depressive symptoms" - which are troubling and unpleasant but which either would have been missed in the past or would have been treated solely with medication.
Other factors carry much more weight now: "A wide range of biological, psychological and social factors, which are not captured well by current diagnostic systems, have a significant impact on the course of depression and the response to treatment."
We're using DSM-IV, not ICD-10, which "also makes it less likely that a diagnosis of depression will be based solely on symptom counting."
To get the therapy in the UK the person would self-refer to an IAPT (Improving Access to Psychological Therapies) style course. That would carry some kind of assessment of need, so the person would have another check (the first being the GP) to see if they need specialist secondary care.
The important points here for the OP are the much greater emphasis on therapy, not just medication, and on how the person is coping with life, not just counting symptoms. Of course, some places do this much better than others.
Obviously http://blog.codinghorror.com/separating-programming-sheep-fr...
> My physician put me on the then-standard treatment for depression, an SSRI.
2014 is much better. http://www.nice.org.uk/guidance/CG90