
Cognitive training: A field in search of a phenomenon

hackerlight

> We demonstrate that this optimism is due to the field neglecting the results of meta-analyses and largely ignoring the statistical explanation that apparent effects are due to a combination of sampling errors and other artifacts.

Oh, where have I heard this before? Medicine, psychology, economics, and every other high-level empirical discipline trying to tease out small effects ends up chasing its tail as statistical noise and publication bias trick the community into thinking something is going on, which then wastes decades of research effort and public money. Systemic reform of the culture in which empirical social science is carried out is really needed.
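
It's easy to simulate the mechanism. A minimal, hypothetical sketch (nothing here is from the paper): run many small studies of an intervention with zero true effect, let only the positive, significant ones get published, and the literature ends up showing a healthy effect size.

```python
# Hypothetical simulation: publication bias + sampling error manufacture
# an "effect" for an intervention whose true effect is exactly zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_studies = 30, 1000
published = []

for _ in range(n_studies):
    treat = rng.normal(0.0, 1.0, n_per_group)  # true effect is zero
    ctrl = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treat, ctrl)
    pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
    d = (treat.mean() - ctrl.mean()) / pooled_sd  # Cohen's d
    if p < 0.05 and d > 0:                        # only "wins" get written up
        published.append(d)

print(f"published {len(published)}/{n_studies} studies, "
      f"mean published d = {np.mean(published):.2f}")
# Typically prints a mean d around 0.6 even though the truth is zero.
```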

tuf7fijrvtc

One of those things is not like the other two. Medicine and Economics, I'd argue, have far stronger track records than Psychology, which bends its theory enough to qualify for the Olympics at even the vaguest sign of political pressure. Yielding over and over to LGBT activism killed most of Psychology's credibility.

light_hue_1

Yes. Everyone adjacent to this field has known for a long time that it's just a bunch of charlatans and liars looking to prey on older people and people with disabilities.

There are so many startups (like, say, https://www.neurotrackerx.com/) devoted to taking people's money based on nonsense "science". Even if the science were true, the effect sizes are so small as to be meaningless.

torginus

I just wonder: how do you get so many prestigious universities, companies, medical institutions, etc. to endorse your product if it basically doesn't work?

light_hue_1

Universities don't endorse products; faculty members do, and they do so in exchange for advisor spots that come with monthly cheques. Companies endorse things only when it helps their bottom line. Funding agencies give grants; they don't endorse anything. They don't want to fall behind, so they want to see whether a technology works, and it's perfectly normal for them to hand out small amounts of money and run trials. That's not an endorsement, even though companies treat it that way.

There are countless totems and lucky charms that people use to "increase their performance" from blowing on dice to your favorite rabbit's foot. There is just as much evidence for blowing on dice helping as for neurotracker helping.

pmontra

If there are customers, they'll sell the product. Most of the stuff we buy every day isn't as good as in the ads, or doesn't do us any good at all (maybe it even damages us).

mjburgess

There's no real downside. They're part of the scam.

I think this largely applies to AI (and the like) too. The universities are the enablers of a system of "commercialisable pseudoscience", and are in bed with all those companies profiting from it.

torginus

Does that mean that I can go to UC Berkeley (it's on the website) with a sufficiently big bag of bills, and they'll endorse my miracle ointment that supposedly cures cancer?

drdrek

It's not a case of the scientific community being unaware of this; even the Wikipedia page for "Cognitive training" has this paragraph in its summary:

"Scientific investigation into the effectiveness of brain training activities have concluded that they have no impact on intelligence or everyday cognitive ability, and that most programs had no peer reviewed published evidence of their efficacy."

It's just a bunch of people who profit from it, and no amount of negative research papers will dissuade them as long as there is money to be made.

Madmallard

Do you know what actually provides broad cognitive and academic benefits, and comes as no surprise to the average person?

Eating healthy, being social, and aerobic exercise.

Wow! So groundbreaking.

she11c0de

Agreed. I'd add sleep to the list.

a9h74j

> As is clear from the empirical evidence reviewed in the previous sections, the likelihood that cognitive training provides broad cognitive and academic benefits is very low indeed; therefore, resources should be devoted to other scientific questions—it is not rational to invest considerable sums of money on a scientific question that has been essentially answered by the negative. In a recent article, Green et al. (2019) took the exact opposite of this decision—they strongly recommended that funding agencies should increase funding for cognitive training. This obviously calls for comments.

I wonder how the lack of direct results relates to cognitive rehabilitation for people with some medical or similar issue. Could one value be to make it very evident to a person that they do have a deficit and need to work on compensation strategies? If so, measured by overall well-being, cognitive rehabilitation could produce positive, measurable outcomes.

colechristensen

There's a problem I have with this kind of science.

It boils down to "We tried to achieve X by doing Y and failed, therefore X isn't possible".

It assumes Y is the correct way to do X and then gives up. Instead it should almost be something like a competition: "We tried to achieve X by doing Y_{1..n}, and these n approaches had statistically significant effects of these sizes."

When you have a lot of people trying to accomplish something, some of them will end up being successful if the thing is possible, but there is an art to finding that optimum and published science really often seems to fall short.
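
As a toy version of that competition framing (the interventions and their effects below are invented for illustration):

```python
# Toy version of the "competition" framing: test Y_1..Y_n against the
# same outcome and report every effect size, not one pass/fail verdict.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effects = {"Y1": 0.0, "Y2": 0.1, "Y3": 0.5}  # hypothetical interventions
n = 100

for name, d in true_effects.items():
    treat = rng.normal(d, 1.0, n)
    ctrl = rng.normal(0.0, 1.0, n)
    est = treat.mean() - ctrl.mean()
    _, p = stats.ttest_ind(treat, ctrl)
    print(f"{name}: estimated effect = {est:+.2f}, p = {p:.3f}")
# Reporting all of them (including the nulls) is what lets a field find
# the optimum instead of declaring X impossible after one failed Y.
```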

scott_s

I think you are mischaracterizing this meta-analysis. First, a meta-analysis is never about trying just one thing; that's what a single study is for. A meta-analysis looks at many studies over a long period of time and tries to find consistent patterns that emerge from the whole. That's how we systematically figure out whether results are due to chance or there's a real effect. The pattern these authors are pulling from all of these studies is that there is no real effect. When you find no real effect over many studies for a long time, you inevitably have to confront the question: is what we're trying to do possible?
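
For concreteness, the mechanical core of a fixed-effect meta-analysis is just inverse-variance weighting. A sketch with made-up study numbers (nothing here is from the paper):

```python
# Minimal fixed-effect meta-analysis: inverse-variance weighting pools
# many noisy per-study estimates into one. Numbers are invented.
import numpy as np

effects = np.array([0.30, -0.10, 0.05, 0.20, -0.05])  # per-study effect sizes
ses = np.array([0.15, 0.12, 0.10, 0.18, 0.11])        # per-study standard errors

w = 1.0 / ses**2                        # more precise studies get more weight
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
# Individual studies bounce around zero; the pooled estimate shows whether
# anything consistent survives once sampling noise is averaged out.
```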

mlyle

> When you find no real effect over many studies for a long time, you inevitably have to confront the question: is what we're trying to do possible?

Devil's advocate: The same would apply to say, Alzheimer's treatments, where we've had no real effect over many studies for a long time. Should we give up?

colechristensen

>When you find no real effect over many studies for a long time, you inevitably have to confront the question: is what we're trying to do possible?

This is what I'm criticizing. The methodology for searching for real effects is broken.

SubiculumCode

One problem with meta-analyses is that they often collapse studies across theoretically important details, and thus unsurprisingly find weak or inconsistent effects. I saw this in a lot of meta-analyses of memory function.
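
A made-up illustration of that collapsing problem: two training types with real but opposite effects pool to a null.

```python
# Pooling across a theoretically important moderator can wash out two
# real but opposite effects into one "weak, inconsistent" result.
# All numbers below are invented for illustration.
import numpy as np

# Five studies of training type A (true d ~ +0.4), five of type B (~ -0.4)
effects = np.array([0.45, 0.38, 0.42, 0.35, 0.40,
                    -0.41, -0.37, -0.44, -0.36, -0.42])
ses = np.full(10, 0.15)

w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
print(f"pooled across both types: d = {pooled:.2f}")  # ~0.00
print(f"type A alone: d = {effects[:5].mean():.2f}")  # ~+0.40
print(f"type B alone: d = {effects[5:].mean():.2f}")  # ~-0.40
```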

thrown_22

We also have Q, R, S, T, U, V, and W that need funding and aren't getting it because X is getting all of it.

If X is too hard, try something else that's easier.

hmahncke

>> Examples include the ACTIVE trial, commercial brain-training games (e.g., Neuroracer, Lumosity, and BrainHQ), and multidomain training programs (Binder et al., 2016; Buitenweg et al., 2017; Duyck & Op de Beeck, 2019). To date, none of these regimens have shown compelling evidence, or any evidence at all, of training-induced far transfer to either cognitive tests or real-life skills >>

ACTIVE showed that cognitive training slowed decline in instrumental activities of daily living [1], and that adaptive computerized speed training in particular reduced at-fault car crashes [2], reduced depressive symptoms [3], and most importantly reduced the incidence of dementia [4]. The NIH is spending tens of millions of dollars on follow-up trials to extend the results.

To dismiss ACTIVE in one brief paragraph is... startling.

>> We demonstrate that this optimism is due to the field neglecting the results of meta-analyses >>

A strong statement from a paper that doesn't seem to cover multiple positive meta-analyses of cognitive training [5, for example].

In my view, if you read a lot of papers in this field (and I do), the pattern is that negative articles generally focus on working memory training and effects on IQ or "generalized cognitive ability" (whatever that is); and positive articles generally focus on neurocognitive measures and real-world functional measures. One reasonable interpretation [and there are many!] is that programs focused on using working memory techniques to improve IQ are not generally effective, and programs using speed/attention training to improve specific aspects of cognitive and real-world performance are effective.

Meanwhile, out in the clinical world, cognitive training is now recommended by clinical guidelines from the American Academy of Neurology and the World Health Organization, and offered as a benefit by a dozen Medicare Advantage plans around the country.

Disclaimer: I work at BrainHQ and have published in this field. Further disclaimer: an HN comment isn't an academic article.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4055506/
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3057872/
[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2657170/
[4] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5700828/
[5] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7050567/

tgv

ACTIVE doesn't have much to show for it, does it? Effect sizes are small, and some are almost absent, while there are differences in other factors between the groups of the same magnitude (drinking, cholesterol, diabetes, subject withdrawal, etc.). The calculation of the effect size is also dubious: it assumes the changes are linear and independent of the starting position, and the baseline is not within the group, although I can't see an easy way to get an objective one either.

As to the statistics: it's p-values all over, with all of the associated problems. The p-value in article [4] comes in just below the 0.05 mark, but only for speed training. They also make two other comparisons, which are nowhere near significant. That is really suspicious, and there's no correction for multiple comparisons.
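
A back-of-the-envelope Bonferroni check (using an illustrative p = 0.049 for the speed arm, not the study's exact value):

```python
# Bonferroni correction for three comparisons (memory, reasoning, speed);
# p-values here are illustrative, not the exact numbers from [4].
pvals = {"speed": 0.049, "memory": 0.30, "reasoning": 0.50}
m = len(pvals)
for arm, p in pvals.items():
    print(f"{arm}: raw p = {p:.3f}, corrected p = {min(p * m, 1.0):.3f}")
# speed: raw p = 0.049 -> corrected p = 0.147, no longer below 0.05
```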

All in all, this short inspection doesn't convince me the OP article has it wrong.

hmahncke

For folks who dislike p-value reporting (and I'm one of them), we can focus on the effect sizes for speed of processing training in ACTIVE:

[1] 0.36 in slowing decline in functional abilities, equivalent to ~3 years of delayed decline
[2] 48% reduction in at-fault auto crash risk
[3] 30% reduction in the risk of experiencing serious [0.5 s.d.] worsening of depressive symptoms
[4] 29% reduction in dementia incidence [hazard ratio]

These are all clinically meaningful effect sizes.
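
For intuition about what numbers like these mean, two standard conversions (a sketch, not additional ACTIVE results; the hazard ratio of 0.71 is inferred from the 29% figure above):

```python
# Standard effect-size conversions for intuition; not new ACTIVE results.
from math import sqrt
from statistics import NormalDist

d = 0.36  # Cohen's d for slowed functional decline
# Common-language effect size: P(random trained subject beats random control)
print(f"P(superiority) for d = {d}: {NormalDist().cdf(d / sqrt(2)):.2f}")  # ~0.60

hr = 0.71  # hazard ratio implied by a 29% reduction in dementia incidence
print(f"HR {hr} = {100 * (1 - hr):.0f}% relative reduction in incidence")
```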

Regarding the dementia incidence study, it's correct that two of the cognitive training interventions did not show effects, and speed of processing training did. In my view, a straightforward interpretation is that different types of cognitive training are different (much like different small molecule pharmaceuticals are different), and consequently they have different effects on endpoints like dementia incidence.

kensai

I don't disagree with you, generally, but if you had opened the first of the 5 quoted papers the OP linked, you would have seen that the effect size in at least one study was medium, not small.

tgv

I did check [1]. That's where the differences between the four groups can be found. There are 12 effect sizes, only two or three medium (≥0.5). Some of the others are even negative. It's pretty suspicious that the type of training matters so much. It's as if it's training to the test.

But if this is what it is, then by all means, do cognitive training for the elderly: it can't do any harm. But there doesn't seem to be any point in further research.