Computerized Cognitive Training in Cognitively Healthy Older Adults: A Systematic Review and Meta-Analysis of Effect Modifiers
Background:
New effective interventions to attenuate age-related cognitive decline are a global priority. Computerized cognitive training (CCT) is believed to be safe and can be inexpensive, but neither its efficacy in enhancing cognitive performance in healthy older adults nor the impact of design factors on such efficacy has been systematically analyzed. Our aim therefore was to quantitatively assess whether CCT programs can enhance cognition in healthy older adults, discriminate responsive from nonresponsive cognitive domains, and identify the most salient design factors.
Methods and Findings:
We systematically searched Medline, Embase, and PsycINFO for relevant studies from the databases' inception to 9 July 2014. Eligible studies were randomized controlled trials investigating the effects of ≥4 h of CCT on performance in neuropsychological tests in older adults without dementia or other cognitive impairment. Fifty-two studies encompassing 4,885 participants were eligible. Intervention designs varied considerably, but after removal of one outlier, heterogeneity across studies was small (I2 = 29.92%). There was no systematic evidence of publication bias. The overall effect size (Hedges' g, random effects model) for CCT versus control was small and statistically significant, g = 0.22 (95% CI 0.15 to 0.29). Small to moderate effect sizes were found for nonverbal memory, g = 0.24 (95% CI 0.09 to 0.38); verbal memory, g = 0.08 (95% CI 0.01 to 0.15); working memory (WM), g = 0.22 (95% CI 0.09 to 0.35); processing speed, g = 0.31 (95% CI 0.11 to 0.50); and visuospatial skills, g = 0.30 (95% CI 0.07 to 0.54). No significant effects were found for executive functions and attention. Moderator analyses revealed that home-based administration was ineffective compared to group-based training, and that training more than three times per week was ineffective versus three or fewer sessions per week. There was no evidence for the effectiveness of WM training, and only weak evidence for sessions shorter than 30 min. These results are limited to healthy older adults, and do not address the durability of training effects.
Conclusions:
CCT is modestly effective at improving cognitive performance in healthy older adults, but efficacy varies across cognitive domains and is largely determined by design choices. Unsupervised at-home training and training more than three times per week are specifically ineffective. Further research is required to enhance the efficacy of the intervention.
Please see later in the article for the Editors' Summary
Published in the journal:
Computerized Cognitive Training in Cognitively Healthy Older Adults: A Systematic Review and Meta-Analysis of Effect Modifiers. PLoS Med 11(11): e1001756. doi:10.1371/journal.pmed.1001756
Category:
Research Article
doi:
https://doi.org/10.1371/journal.pmed.1001756
Introduction
Cognitive decline and impairment are amongst the most feared and costly aspects of aging [1]. The age-specific incidence of cognitive impairment is approximately double that of dementia [2],[3] and can be expected to affect 15%–25% of older individuals [2],[4]. Direct medical costs for older adults with mild cognitive impairment (MCI) are 44% higher than those for non-impaired older adults [5]. Because cognitive decline and impairment are essential criteria for dementia and often require informal care [5], interventions aimed at prevention or attenuation of such decline may have a substantial health and economic impact [3].
Several studies have now established strong and independent links between engagement in cognitively stimulating activities throughout the life span and enhanced late-life cognition, compression of cognitive burden, and reduced risk of cognitive impairment and dementia [6]–[8]. Intense interest has therefore focused on the potential of cognition-based interventions in older adults, especially computerized cognitive training (CCT) [9]. CCT involves structured practice on standardized and cognitively challenging tasks [10], and has several advantages over traditional drill and practice methods, including visually appealing interfaces, efficient and scalable delivery, and the ability to constantly adapt training content and difficulty to individual performance [9],[11]. Sales of commercial CCT packages may soon reach US$1 billion per year [12], but the evidence base for such products, at least in older adults, remains unclear [13].
Prior systematic reviews of generic cognitive interventions in healthy older adults [9],[14]–[18] have noted limitations, especially lack of supporting evidence from active-control trials and lack of replication due to inconsistent or indeterminate methodology. Importantly, these reviews pooled data from studies of CCT along with studies of other cognition-based interventions such as mnemonics or cognitive stimulation that can be as simple as reading newspapers or participating in group discussion [15]–[18]. It is therefore perhaps unsurprising that these reviews reached inconclusive results. A more recent systematic review in healthy older adults [9] was not restricted to randomized controlled trials (RCTs) and included CCT studies along with other computerized interventions such as classes in basic computer use.
The effectiveness of CCT in enhancing cognitive performance in healthy older adults is therefore currently unclear, and the impact of design and implementation factors on efficacy has yet to be systematically analyzed. Using data from RCTs of narrowly defined CCT, we aimed to quantitatively evaluate the efficacy of CCT with respect to multiple cognitive outcomes in healthy older adults. Furthermore, we aimed to test the moderating effect of several key study features in order to better inform future CCT trial design and clinical implementation.
Methods
This work fully complies with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [19] (see Checklist S1). Methods of analysis and inclusion criteria were specified in advance and are documented in Protocol S1.
Eligibility Criteria
Types of studies
Eligible studies were published, peer-reviewed articles reporting results from RCTs of the effects of CCT on one or more cognitive outcomes in healthy older adults.
Types of participants
Eligible studies had mean participant age ≥60 y and participants who lacked any major cognitive, neurological, psychiatric, and/or sensory impairments. Studies with MCI as an inclusion criterion were excluded, as cognitive performance in this population may vary substantially, particularly with respect to variability in the diagnostic criteria of MCI [20].
Types of interventions
Eligible trials compared the effects of ≥4 h of practice on standardized computerized tasks or video games with clear cognitive rationale, administered on personal computers, mobile devices, or gaming consoles, versus an active or passive control condition. Lab-specific interventions that did not involve interaction with a computer were excluded.
Types of outcome measures
Outcomes included performance on one or more cognitive tests that were not included in the training program (i.e., untrained), administered both before and after training. This review is limited to change in performance from baseline to immediately post-training on tests of global cognition, verbal memory, nonverbal memory, working memory (WM), processing speed, attention, language, visuospatial skills, and executive functions. Both primary and secondary outcomes were included. Long-term outcomes, subjective measures (e.g., questionnaires), noncognitive outcomes (e.g., mood, physical), imaging data, and activities of daily living outcome measures were excluded from the analysis.
Information Sources and Search Strategy
We searched Medline, Embase, and PsycINFO using the search terms “cognitive training” OR “brain training” OR “memory training” OR “attention training” OR “reasoning training” OR “computerized training” OR “computer training” OR “video game” OR “computer game”, and by scanning reference lists of previous reviews. No limits were applied for publication dates, and non-English papers were translated. The first search was conducted on 2 December 2013. An updated search was conducted on 9 July 2014.
Study Selection
Two reviewers (A. L. and H. H.) independently screened search results for initial eligibility based on title and abstract. Full-text versions of potentially eligible studies and those whose eligibility was unclear based on title and abstract were assessed by A. L. and H. H., who also contacted authors when eligibility was unclear based on the full report. Disagreements regarding study eligibility were resolved by consulting with M. V., who approved the final list of included studies.
Data Collection and Coding
Coding of outcome measures into cognitive domains was done by two reviewers (A. L. and H. H.) based on accepted neuropsychological categorization [21] or by consensus, and approved by M. V. Table S1 provides the coding of outcomes by cognitive domains. Data were entered into Comprehensive Meta-Analysis (CMA) version 2 (Biostat, Englewood, New Jersey). Data from most studies were entered as means and standard deviations (SDs) for the CCT and control groups at baseline and follow-up, with test–retest correlation set to 0.6. In a few instances, data were entered as post-training mean change [22]–[24] or raw mean difference with a 95% confidence interval [25]. CMA allows for each of these different study outcomes to be flexibly entered into the model. When data could not be extracted from study reports, we contacted the authors requesting raw summary data.
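When only pre- and post-training means and SDs are reported, the SD of the change score must be reconstructed from an assumed test–retest correlation (0.6 in this review). The following is a minimal sketch of that standard formula, not the CMA implementation; the example values are hypothetical:

```python
import math

def change_score_sd(sd_pre: float, sd_post: float, r: float = 0.6) -> float:
    """SD of the pre-to-post change score when only pre and post SDs are
    reported, given an assumed test-retest correlation r (0.6 here)."""
    return math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)

# Hypothetical example: pre SD = 10, post SD = 12, r = 0.6
# sqrt(100 + 144 - 144) = 10.0
print(change_score_sd(10, 12))
```

Note that higher assumed test–retest correlations yield smaller change-score SDs, which is why this assumption is varied in the sensitivity analyses below.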
CCT programs were divided into five content types: speed of processing (SOP) training, WM training, attention training, multidomain training, and video games. Video games were defined as computer programs that were distributed for entertainment purposes before they were tried as cognitive interventions [26].
When studies presented data for both active and passive control groups, only the active control group was used as a comparison to the CCT group. When studies presented data from both young and older adults, only data from the older group were analyzed.
Risk of Bias in Individual Studies and Study Appraisal
Risk of bias in individual studies was assessed using the items recommended in the Cochrane Collaboration's risk of bias tool [27]: sequence generation; allocation concealment; blinding of participants, personnel, and outcome assessors; incomplete outcome data; selective outcome reporting; and other sources of bias. However, because blinding of therapists and participants in CCT trials is impractical, we considered only blinding of assessors when determining risk of bias for the blinding item. We considered trials that did not include assessor blinding or did not perform intention-to-treat analyses to be at high or unclear risk of bias, and all other trials to be at low risk of bias. Authors were contacted when study details were unclear.
In addition, we used the Physiotherapy Evidence Database (PEDro) scale to assess study quality. The PEDro scale is an 11-item scale designed to assess the methodological quality and reporting of RCTs, and is reliable for rating trials of non-pharmacological interventions [28]. As with the risk of bias tool, we did not consider two items in the PEDro scale (blinding of therapists and participants), and therefore the maximum possible PEDro score for studies in this review was 9. All assessments were conducted by H. H. and additional external assessors (see Acknowledgments), and were subsequently reviewed by A. L.
Data Analysis
The primary outcome was standardized mean difference (SMD, calculated as Hedges' g) of post-training change between CCT and control groups. Analyses were conducted for all cognitive results combined, as well as for each of the following cognitive domains: verbal memory, nonverbal memory, WM, processing speed, attention, visuospatial skills, and executive functions (planned analyses of global cognition and language were not performed because of insufficient numbers of studies reporting these outcomes). Precision of the SMD was calculated for each trial by the 95% CI. A positive SMD implies better therapeutic effects over time in the CCT group compared to the control group.
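As a sketch of the standard Hedges' g computation (the small-sample-corrected standardized mean difference; this is not the CMA source code, and the inputs are hypothetical):

```python
def hedges_g(change_t: float, change_c: float, sd_pooled: float,
             n_t: int, n_c: int):
    """Hedges' g for the difference in pre-to-post change between a
    treatment and a control group, with its variance. Sketch of the
    standard formulas only."""
    d = (change_t - change_c) / sd_pooled            # Cohen's d
    df = n_t + n_c - 2
    j = 1 - 3 / (4 * df - 1)                         # small-sample correction
    g = j * d
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    var_g = j**2 * var_d
    return g, var_g
```

With equal group sizes the correction factor J is close to 1, so g is only slightly smaller than d for trials of the sizes included here (mean group size around 45-49).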
When studies presented data from more than one cognitive test, these were combined in two ways. First, all test results were combined to produce a single SMD per study, following established procedure [29]. Second, tests were classified on their main neuropsychological competency (see Table S1), such that each study could contribute to one or more cognitive-domain-specific SMDs. When outcomes from a given study were combined, the effect estimate was the mean amongst the related tests, and the estimate's variance was scaled up based on an assumed intercorrelation between the tests of 0.7 [30],[31]. All analyses were performed using CMA.
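The variance inflation for combining correlated outcomes within a study follows the composite-outcome formula of Borenstein et al. [30]; a minimal sketch, assuming the review's intercorrelation of 0.7 (not the CMA implementation):

```python
def combined_outcome(effects, variances, r=0.7):
    """Combine multiple outcomes from one study into a single effect:
    the estimate is the mean of the per-test effects, and its variance
    accounts for an assumed intercorrelation r between tests."""
    m = len(effects)
    mean_effect = sum(effects) / m
    var = 0.0
    for i in range(m):
        for j in range(m):
            rho = 1.0 if i == j else r      # full correlation with itself
            var += rho * (variances[i] * variances[j]) ** 0.5
    return mean_effect, var / m**2
```

Because the tests are assumed to be positively correlated, the combined variance shrinks more slowly than it would for independent outcomes, so studies reporting many tests are not unduly overweighted.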
Because we expected studies to report multiple cognitive outcomes and display methodological variability [9],[13], our analyses were planned in three stages. First, in our main analysis we combined all outcomes from each study and pooled these to determine the overall efficacy of CCT in enhancing cognition. Second, we performed domain-specific meta-analyses, in which only studies that reported outcomes on a specified cognitive domain were included, using one combined SMD per study. Third, to examine between-study variability and identify design elements that may moderate observed efficacy, we performed subgroup meta-analyses. In the first and second stages, the overall and domain-specific meta-analyses were performed using a random-effects model. Using the same convention for description of Cohen's d effect sizes applied to Hedges' g, SMDs of ≤0.30, >0.30 and <0.60, and ≥0.60 were considered small, moderate, and large, respectively. Heterogeneity across studies was assessed using the I2 statistic with 95% confidence (uncertainty) intervals [32],[33]. I2 values of 25%, 50%, and 75% imply small, moderate, and large heterogeneity, respectively [33]. Forest plots were also used to visually characterize heterogeneity.
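The random-effects pooling and the I2 statistic described above can be sketched with the standard DerSimonian–Laird estimator (CMA implements the same model; this simplified version omits the confidence interval around I2):

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling with Cochran's Q and the
    I^2 heterogeneity statistic. Sketch of the standard method."""
    w = [1 / v for v in variances]                   # fixed-effects weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = (1 / sum(w_star)) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2
```

When between-study heterogeneity (tau2) is zero, the random-effects estimate collapses to the fixed-effects estimate; as tau2 grows, weights equalize and smaller studies gain influence, which is why the fixed-effects model is preferred when small-study bias is suspected.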
In the third stage, subgroup analyses were based on a mixed-effects model, which uses a random-effects model to generate within-subgroup variance and a fixed-effects model to compare effects between subgroups [34]. Between-subgroup heterogeneity was tested using Cochran's Q statistic [27] and was considered significant at the p<0.05 level. The following moderating factors were included in our analysis plan: type of CCT program (i.e., cognitive content of training), delivery format (group or home-based training), session length, session frequency, total duration of the program (dose), control condition (active or passive control), and risk of bias (high or low risk of bias as defined above).
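The between-subgroup comparison reduces to a fixed-effects Q test on the pooled subgroup estimates (each subgroup having first been pooled with a random-effects model). A sketch, with hypothetical subgroup summaries:

```python
def subgroup_q(subgroup_effects, subgroup_variances):
    """Fixed-effects Q test comparing pooled subgroup effects; a
    significant Q indicates the moderator explains between-study
    variability. Sketch of the mixed-effects approach described above."""
    w = [1 / v for v in subgroup_variances]
    grand = sum(wi * g for wi, g in zip(w, subgroup_effects)) / sum(w)
    q = sum(wi * (g - grand) ** 2 for wi, g in zip(w, subgroup_effects))
    df = len(subgroup_effects) - 1                  # subgroups minus one
    return q, df
```

The resulting Q is compared against a chi-squared distribution with df degrees of freedom, exactly as for the delivery-format comparison reported in the Results (Q = 7.183, df = 1).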
Risk of Bias across Studies
In order to assess risk of publication bias, funnel plots for overall outcomes as well as for each cognitive domain were inspected for asymmetry (i.e., SMDs charted against their standard error) [35]. When ten or more studies were pooled in a given meta-analysis, we formally tested funnel plot asymmetry using Egger's regression test of the intercept [36]. A positive intercept implies that smaller studies tended to report more positive results than larger trials. When the test found notable asymmetry (p<0.1), we report primary outcomes based on a fixed-effects model along with a random-effects model, as the former gives more weight to larger trials and helps to counterbalance a possible inflation of therapeutic effect [35]; in these cases we discuss the more conservative effect estimate.
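Egger's test regresses the standardized effect (g divided by its standard error) on precision (1 over the standard error); the intercept quantifies funnel-plot asymmetry. A minimal ordinary-least-squares sketch that returns only the intercept, omitting the t-test used to obtain the p-values reported below:

```python
def egger_intercept(effects, std_errors):
    """Intercept of Egger's regression test for funnel-plot asymmetry:
    regress standardized effect (g / SE) on precision (1 / SE) by OLS.
    A nonzero intercept suggests small-study effects."""
    y = [g / se for g, se in zip(effects, std_errors)]
    x = [1 / se for se in std_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx
```

With a perfectly symmetric funnel (identical effects at varying precision) the intercept is zero; a positive intercept indicates that smaller, less precise studies reported larger effects.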
Sensitivity Analyses
For the main analysis (efficacy across all cognitive outcomes), we tested the robustness of our results to parametric variation of the following assumptions: test–retest correlation (set at 0.6 and tested from 0.5 to 0.7), within-study multiple outcome intercorrelation (set at 0.7 and tested from 0.6 to 0.8), inclusion of passive controls instead of active controls in studies with multiple controls (k = 3), and use of a fixed-effects model instead of a random-effects model. These results are reported in Table S5.
Results
Study Selection
After duplicate search results were removed, 6,294 studies were initially screened for eligibility, of which 5,974 were excluded based on abstract and title. Three hundred twenty full-text articles were assessed for eligibility, of which 45 were deemed potentially eligible. After consulting with authors, three studies were excluded because they did not use randomized assignment [37]–[39], and a further two studies because authors did not provide necessary data [40],[41]. The resulting 40 studies from electronic search were supplemented by 11 studies [42]–[52] obtained by scanning reference lists of previous reviews and consulting with researchers, providing a total of 51 articles included in the analysis (Figure 1). Data from one article [53] were split into two studies, resulting in a final number of datasets cited in this review of 52 (for a detailed description of groups selected from each study, see Table S2).
We contacted 51 authors to request detailed summary data, enquire about possible eligibility, or determine risk of bias. Of these, 40 responded and provided information, two responded but did not provide information, and nine did not respond. Data for 14 studies were provided by authors [22],[23],[42],[49],[54]–[63] (see Table S3). The complete dataset is provided here as Dataset S1.
Characteristics of Included Studies
Overall, the 52 datasets included in this review encompassed 4,885 participants (CCT, n = 2,527, mean group size = 49; controls, n = 2,358, mean group size = 45; Table 1) and reported 396 cognitive outcomes. Mean participant age ranged from 60 to 82 y, and about 60% of participants were women. The cohorts were largely from the US [22],[25],[42],[45]–[47],[51]–[55],[61],[64]–[76] or Europe [23],[43],[44],[48],[56],[60],[63],[77]–[85], in addition to studies from Canada [57]–[59], Australia [49],[86], Israel [87], China [62], Taiwan [88], Republic of Korea [50], and Japan [24]. One study [49] was by authors of this review.
An active control group was used in 26 studies (50%), and assessor blinding was confirmed in 24 (46.2%) of studies. The average PEDro score was 6.2/9 (SD = 1.35), and 35 (67.3%) studies were found to have a high risk of bias (Table S4). As expected, risk of bias and study quality were related: significant differences in PEDro scores were found for studies with high risk of bias (mean PEDro score = 5.69, SD = 1.08) compared to studies with low risk of bias (mean PEDro score = 7.18, SD = 1.33; t(50) = −4.324, p<0.001).
Type of CCT varied considerably across studies (Table 1). Twenty-four studies used multidomain training, nine used SOP training, nine used WM training, six used attention training, and four were video games. Group (center-based) training was conducted in 32 (61.5%) of the studies, and 19 (36.5%) provided training at home. A study by Berry et al. [55] combined data from participants who trained at home with others who trained in research offices, and was therefore excluded from our subgroup analysis of delivery mode. In a study by Shatil et al. [84], 50 participants received group-based CCT and ten trained at home; data for the latter ten participants were excluded from the analysis (raw data for this study were provided in the online publication). Twenty-nine studies trained participants 2–3 times per week, 17 administered more than three sessions per week, and six administered only one session per week. Results of individual studies are provided in Table S2.
Overall Efficacy on Cognitive Outcomes
The overall effect of CCT on cognition was small and statistically significant (g = 0.28, 95% CI 0.18 to 0.39, p<0.001). Heterogeneity across studies was moderate (I2 = 69.03%, 95% CI 58.87% to 76.68%). The forest plot revealed one conspicuous outlier [65]: this study reported two extremely large SMDs (g>3.0; see Table S2) considered implausible and so was removed from all further analyses. Following this, heterogeneity reduced to a low level, and the summary effect size was reduced (g = 0.22, 95% CI 0.15 to 0.29, p<0.001; I2 = 29.92%, 95% CI 0.63% to 50.57%; Figure 2). The resulting funnel plot did not show significant asymmetry (Egger's intercept = 0.48, p = 0.12; Figure 3). These results were robust to sensitivity analyses around our major assumptions (Table S5).
Domain-Specific Efficacy
Verbal memory
Twenty-three studies reported verbal memory outcomes. The combined effect size was small and statistically significant (g = 0.16, 95% CI 0.03 to 0.29, p = 0.02; Figure 4). Heterogeneity across studies was moderate (I2 = 50.12%, 95% CI 19.31% to 69.16%). The funnel plot showed potential asymmetry (Egger's intercept = 0.81, p = 0.07; Figure S1). A fixed-effects analysis was therefore conducted and revealed a very small effect size (g = 0.08, 95% CI 0.01 to 0.15, p = 0.03; Figure 4).
Nonverbal memory
Thirteen studies reported nonverbal memory outcomes. The combined effect size was small and statistically significant (g = 0.24, 95% CI 0.09 to 0.38, p = 0.002; Figure 5). Heterogeneity across studies was small (I2 = 24.52%, 95% CI 0% to 60.75%), and the funnel plot did not show evidence of asymmetry (Egger's intercept = 1.75, p = 0.18; Figure S1).
Working memory
Twenty-eight studies reported WM outcomes. The combined effect size was small and statistically significant (g = 0.22, 95% CI 0.09 to 0.35, p<0.001; Figure 6). Heterogeneity across studies was moderate (I2 = 45.55%, 95% CI 15.05% to 65.1%). The funnel plot did not show significant asymmetry (Egger's intercept = −0.1, p = 0.89; Figure S1).
Processing speed
Thirty-three studies reported processing speed outcomes. The combined effect size was moderate and statistically significant (g = 0.31, 95% CI 0.11 to 0.50, p = 0.002; Figure 7). Heterogeneity across studies was large (I2 = 84.53%, 95% CI 79.23% to 88.48%). We detected evidence of unusual funnel plot asymmetry, whereby larger studies reported larger effect sizes (Egger's intercept = −2.99, p<0.01; Figure S1). A fixed-effects analysis revealed a substantially larger effect size (g = 0.58, 95% CI 0.52 to 0.65, p<0.001; Figure 7).
Executive functions
Twenty-nine studies reported outcomes with measures of executive functions. The combined effect size was negligible and statistically non-significant (g = 0.09, 95% CI −0.02 to 0.19, p = 0.096; Figure 8). Heterogeneity across studies was small (I2 = 31.82%, 95% CI 0% to 56.78%). The funnel plot suggested larger effect sizes in smaller studies (Egger's intercept = 0.65, p = 0.097; Figure S1).
Attention
Eleven studies reported attention-related outcomes. The combined effect size was small and non-significant (g = 0.24, 95% CI −0.01 to 0.50, p = 0.06; Figure 9). Heterogeneity across studies was moderate (I2 = 62.97%, 95% CI 28.98% to 80.69%). The funnel plot did not display notable asymmetry (Egger's intercept = 2.61, p = 0.13; Figure S1).
Visuospatial skills
Eight studies reported visuospatial outcomes. The combined effect size was small to moderate and statistically significant (g = 0.30, 95% CI 0.07 to 0.54, p = 0.01; Figure 10). Heterogeneity across studies was moderate (I2 = 42.66%, 95% CI 0% to 74.65%). The funnel plot revealed potential asymmetry, suggesting a greater effect in smaller studies (Figure S1), but formal testing was not conducted because of the small number of studies.
Global cognition and language
Planned analyses of global cognition and language were not performed as these outcomes were reported in only three studies each ([24],[50],[88] and [49],[72],[75], respectively).
Moderators of CCT Efficacy
In order to examine the relationship between CCT design choices and training outcomes, we evaluated efficacy in predefined subgroups (Figure 11). Based on all cognitive outcomes, there was a significant difference in the efficacy of group-based training (g = 0.29, 95% CI 0.21 to 0.38, p<0.001) compared to home-based administration (g = 0.09, 95% CI −0.02 to 0.21, p = 0.11; Q statistic for between-group heterogeneity = 7.183, df = 1, p = 0.007). Study-to-study heterogeneity within the group-based training studies was low (I2 = 11.88%, CI 0% to 43%; Q = 35.18, df = 31, p = 0.28; Figure 11). There was also a significant effect for training frequency, with significant effect estimates in studies that administered one (g = 0.34, 95% CI 0.16 to 0.51, p<0.001) or 2–3 sessions per week (g = 0.28, 95% CI 0.18 to 0.37, p<0.001) but not in studies that trained their participants more than three times per week (g = 0.07, 95% CI −0.06 to 0.19, p = 0.28; Q = 9.082, df = 2, p = 0.011). Within-subgroup heterogeneity was low for training either once per week (I2 = 0%, 95% CI 0% to 0%; Q = 1.04, df = 5, p = 0.96) or 2–3 times per week (18.93%, 95% CI 0% to 49%; Q = 34.54, df = 28, p = 0.18). The intersection of these two moderators (group- versus home-based administration and number of sessions per week), i.e., group-based CCT studies that administered 2–3 sessions per week, comprised a subset of k = 25 studies and produced a similar effect estimate: g = 0.29, 95% CI 0.18 to 0.39, p<0.001; Q statistic for within-subgroup heterogeneity = 30.84, df = 24, p = 0.16; I2 = 22.18%, CI 0% to 52.44%.
A similar sequence of moderator analyses for each cognitive domain can be found in Figures S2, S3, S4, S5, S6, S7, S8. A summary of these outcomes is visually presented in Figure 12, a matrix that shows color-coded SMDs for each cognitive domain by each moderating factor. From this figure it is evident that there is no positive evidence for the efficacy of training involving WM (based on either all studies or by subgroup), nor for training administered more than three sessions per week, for any of the cognitive outcomes in this review. At the domain-specific level, evidence for the efficacy of CCT training at home, training only once per week, or in sessions shorter than 30 min is weak.
Discussion
CCT research involving healthy older participants has now matured to a substantial literature, encompassing 51 RCTs of reasonable quality. When examined en masse, CCT is effective at enhancing cognitive function in healthy older adults, but small effect sizes are to be expected. By definition this result pertains to the theoretical “average” older person—it is currently not possible to predict whether a given individual's cognitive abilities will improve beyond normal practice effects. More importantly, the efficacy of CCT depends on particular design choices as well as the cognitive outcome of interest. Moderator analyses revealed the inefficacy of home-based training compared to group-based training, as well as training more than three times a week. Domain-specific analyses found evidence of efficacy for nonverbal memory, processing speed, WM, and visuospatial outcomes, but not for attention and executive functions. Equally important, we found consistent evidence for the likely inefficacy of WM training and the use of brief training sessions.
Evidence of possible publication bias was found only for reports of verbal memory outcomes. In this case a more conservative fixed-effects model was used and found that CCT efficacy in this domain is weak at best (g = 0.08, 95% CI 0.01 to 0.15). Somewhat atypically, the funnel plot for SOP outcomes found that the largest trials tended to find the largest effect sizes. Given that more than half of all participants in this systematic review undertook speed-based training [47],[50]–[55],[59],[69], whose efficacy does not generalize beyond speed-based outcomes (Figure 12), it is possible this is a peculiarity of studies focused on speed training and testing.
Analyses of verbal memory and executive outcomes were sufficiently powered, encompassing 23 and 29 trials, respectively, yet yielded negligible effects. Whilst we recognize that no universal consensus is possible when classifying cognitive tests to particular domains, we consulted a widely cited textbook [21] for this task (see Table S1), and so the negative results for verbal memory and executive outcomes likely represent deficits in the efficacy of CCT in healthy older individuals. Further research aimed at assessing the therapeutic responsiveness of these two key cognitive domains is required, along with development of new and better targeted CCT technology. Consideration should also be given to combining CCT with other effective interventions, such as physical exercise for executive functions [89] and memory strategy training for verbal memory [90].
At the same time, the therapeutic value of several commonly implemented CCT design choices comes into question. We found that WM training alone was not effective in healthy older adults, similar to the limited effects reported in a recent meta-analysis in children and young adults [91]. The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER) [92] is a major trial in progress that involves WM training along with other lifestyle-based interventions, and may shed light on the utility (or lack thereof) of this kind of CCT.
One of the attractions of home-based (often Internet-delivered) CCT is the ability to administer a customized and adaptive intervention in the individual's home, with potential for decreased implementation cost [9] and the facility to target the frail and immobile. However, our formal moderator analysis (based on the conservative Q statistic) revealed a significant interaction between delivery setting and therapeutic outcome, whereby group-based delivery was effective (g = 0.29, 95% CI 0.21 to 0.38) and home-based delivery was not (g = 0.09, 95% CI −0.02 to 0.21). A high degree of consistency amongst group-based training studies suggests that this conclusion is robust (Figure 11). If translated to Mini-Mental State Examination scores, this group-based CCT effect may approximate an average relative improvement of one point [93]. Potentially relevant practice variables when conducting group-based CCT include direct supervision by a trainer to help ensure adherence, treatment fidelity, and compliance; provision of motivational support and encouragement to master challenging tasks that are otherwise easy to avoid; problem solving of IT issues; and nonspecific factors such as social interaction. Indeed, a meta-analysis of memory training in older adults also found that group-based administration was a moderating factor [94]. When conducting CCT, group setting may therefore represent a key therapeutic consideration. Conversely, the popular model of purely home-based training is unlikely to result in cognitive benefits in unimpaired older adults. Future studies may wish to investigate the value of combining initial group-based administration with more long-lasting home-based CCT, as well as test emerging technologies that allow remote clinical supervision and interaction via social media.
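The moderator comparison above rests on a between-subgroup Q statistic. As an illustration only, the calculation can be sketched from the reported subgroup summaries, with standard errors back-derived from the 95% CIs under a normal approximation; the review's own Q statistic was computed from study-level data and may differ:

```python
from statistics import NormalDist

def subgroup_q(effects):
    """Fixed-effect between-subgroup heterogeneity statistic Q (df = k - 1).
    effects: list of (g, se) subgroup summary estimates."""
    weights = [1 / se ** 2 for _, se in effects]
    g_bar = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
    return sum(w * (g - g_bar) ** 2 for (g, _), w in zip(effects, weights))

# SEs back-calculated from the reported 95% CIs: se = (upper - lower) / 3.92
group_based = (0.29, (0.38 - 0.21) / 3.92)    # g = 0.29, 95% CI 0.21 to 0.38
home_based = (0.09, (0.21 - (-0.02)) / 3.92)  # g = 0.09, 95% CI -0.02 to 0.21

q = subgroup_q([group_based, home_based])
# For df = 1, the chi-square p-value equals 2 * (1 - Phi(sqrt(Q)))
p = 2 * (1 - NormalDist().cdf(q ** 0.5))
print(f"Q = {q:.2f}, p = {p:.4f}")
```

Under these assumptions the back-calculated Q is significant at conventional thresholds, consistent with the reported interaction between delivery setting and outcome.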
We also found interesting evidence for the importance of appropriate CCT dosing. The results suggested that short sessions of less than 30 min may be ineffective, possibly because synaptic plasticity is more likely after 30–60 min of stimulation [95]. More clearly, our analysis indicated that training more than three times per week neutralizes CCT efficacy (Figure 11). It is possible that there is a maximal effective dose for CCT, beyond which factors such as cognitive fatigue [96] may interfere with training gains. This might not be unique to older persons, as comparative studies in children [97] and young adults [98] have linked spaced training schedules with greater CCT efficacy.
Limitations
To our knowledge, this is the first quantitative meta-analysis of RCTs in the defined field of CCT in cognitively healthy older adults. As opposed to previous reviews that included various cognitive interventions and research designs [9],[14]–[18], we employed strict eligibility criteria, allowing comparison of results across cognitive domains as well as testing of the impact of design factors. However, our results do not necessarily generalize to cognitively impaired older persons, especially the high-risk MCI population, where results appear to be mixed [99],[100]. This review also focused on change in neuropsychological measures immediately after the end of training; it therefore provides no indication of the durability of the observed gains, nor of their transfer into real-life outcomes such as independence, quality of life, daily functioning, or risk of long-term cognitive morbidity. Because individual RCTs typically report multiple cognitive test results for a particular cognitive domain, these were combined statistically (as per prior practice [30],[31]), but this approach is blind to the relative psychometric merits of the individual tests. More sophisticated analyses may therefore need to be developed that incorporate test-specific weightings when combining test outcomes. Finally, whilst the CCT literature is now substantial in terms of the number of RCTs (k = 51), the typical trial was modest in size (median N = 45). Future studies incorporating supervised group-based delivery and a frequency of 2–3 sessions per week can anticipate an approximate effect size of g = 0.29, suggesting that a sample of 87 is sufficient to achieve power of 0.8 while allowing for 15% attrition.
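The stated enrolment figure can be approximately reproduced under one set of assumptions — a one-sided test at α = 0.05 with a normal approximation — although the review does not report the exact design underlying its calculation, so this sketch is illustrative rather than a reconstruction of the authors' method:

```python
from math import ceil
from statistics import NormalDist

def required_n(g, power=0.80, alpha=0.05, attrition=0.15, one_sided=True):
    """Normal-approximation sample size for detecting effect size g,
    inflated for expected attrition. The one-sided default is an
    assumption; the review does not state the sidedness of its test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha if one_sided else 1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    completers = ((z_alpha + z_beta) / g) ** 2  # completers needed
    return ceil(completers / (1 - attrition))   # enrolees before attrition

print(required_n(0.29))
```

Under these assumptions the calculation lands on 87, matching the text; note that a two-sided, two-arm design powered to detect g = 0.29 between groups would require substantially more participants.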
Conclusions
Discussion of CCT tends to focus on whether it “works” rather than on what factors contribute to efficacy or inefficacy [13],[101]. This systematic review indicates that the overall effect of CCT on cognitive performance in healthy older adults is positive but small, and that it is ineffective for executive functions and verbal memory. Accurate predictions for individuals are not possible from these group-level results. More importantly, our analysis shows that efficacy varies by cognitive outcome and is to a large extent determined by design choices. In general, group-based CCT is effective but home-based CCT is not, and training more than three times a week is counterproductive. Consistently ineffective design choices should therefore be avoided. Improving executive functions or verbal memory may require the development of new technology or combined interventions. There remains great scope for additional research to further enhance this non-pharmacological intervention for older individuals.
References
1. DearyIJ, CorleyJ, GowAJ, HarrisSE, HoulihanLM, et al. (2009) Age-associated cognitive decline. Br Med Bull 92: 135–152.
2. GrahamJE, RockwoodK, BeattieBL, EastwoodR, GauthierS, et al. (1997) Prevalence and severity of cognitive impairment with and without dementia in an elderly population. Lancet 349: 1793–1796.
3. PlassmanBL, LangaKM, McCammonRJ, FisherGG, PotterGG, et al. (2011) Incidence of dementia and cognitive impairment, not dementia in the United States. Ann Neurol 70: 418–426.
4. UnverzagtFW, GaoS, BaiyewuO, OgunniyiAO, GurejeO, et al. (2001) Prevalence of cognitive impairment: data from the Indianapolis Study of Health and Aging. Neurology 57: 1655–1662.
5. ZhuCW, SanoM, FerrisSH, WhitehousePJ, PattersonMB, et al. (2013) Health-related resource use and costs in elderly adults with and without mild cognitive impairment. J Am Geriatr Soc 61: 396–402.
6. MarioniRE, ValenzuelaMJ, van den HoutA, BrayneC, MatthewsFE, et al. (2012) Active cognitive lifestyle is associated with positive cognitive health transitions and compression of morbidity from age sixty-five. PLoS ONE 7: e50940.
7. VergheseJ, LiptonRB, KatzMJ, HallCB, DerbyCA, et al. (2003) Leisure activities and the risk of dementia in the elderly. N Engl J Med 348: 2508–2516.
8. WilsonRS, Mendes De LeonCF, BarnesLL, SchneiderJA, BieniasJL, et al. (2002) Participation in cognitively stimulating activities and risk of incident Alzheimer disease. JAMA 287: 742–748.
9. KueiderAM, ParisiJM, GrossAL, RebokGW (2012) Computerized cognitive training with older adults: a systematic review. PLoS ONE 7: e40588.
10. ClareL, WoodsRT, Moniz CookED, OrrellM, SpectorA (2003) Cognitive rehabilitation and cognitive training for early-stage Alzheimer's disease and vascular dementia. Cochrane Database Syst Rev 2003: CD003260.
11. JakA, SeelyeA, JurickS (2013) Crosswords to computers: a critical review of popular approaches to cognitive enhancement. Neuropsychol Rev 23: 13–26.
12. Commercialising neuroscience: Brain sells. The Economist. Available: http://www.economist.com/news/business/21583260-cognitive-training-may-be-moneyspinner-despite-scientists-doubts-brain-sells. Accessed 11 November 2014.
13. GreenCS, StrobachT, SchubertT (2013) On methodological standards in training and transfer experiments. Psychol Res E-pub ahead of print. doi: 10.1007/s00426-013-0535-3
14. ValenzuelaM, SachdevP (2009) Can cognitive exercise prevent the onset of dementia? Systematic review of randomized clinical trials with longitudinal follow-up. Am J Geriatr Psychiatry 17: 179–187.
15. PappKV, WalshSJ, SnyderPJ (2009) Immediate and delayed effects of cognitive interventions in healthy elderly: a review of current literature and future directions. Alzheimers Dement 5: 50–60.
16. MartinM, ClareL, AltgassenAM, CameronMH, ZehnderF (2011) Cognition-based interventions for healthy older people and people with mild cognitive impairment. Cochrane Database Syst Rev 2011: CD006220.
17. ReijndersJ, van HeugtenC, van BoxtelM (2013) Cognitive interventions in healthy older adults and people with mild cognitive impairment: a systematic review. Ageing Res Rev 12: 263–275.
18. KellyME, LoughreyD, LawlorBA, RobertsonIH, WalshC, et al. (2014) The impact of cognitive training and mental stimulation on cognitive and everyday functioning of healthy older adults: a systematic review and meta-analysis. Ageing Res Rev 15: 28–43.
19. LiberatiA, AltmanDG, TetzlaffJ, MulrowC, GotzschePC, et al. (2009) The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 6: e1000100.
20. WardA, ArrighiHM, MichelsS, CedarbaumJM (2012) Mild cognitive impairment: disparity of incidence and prevalence estimates. Alzheimers Dement 8: 14–21.
21. Strauss EH, Sherman EMS, Spreen OA, editors (2006) A compendium of neuropsychological tests: administration, norms and commentary. Oxford: Oxford University Press.
22. AngueraJA, BoccanfusoJ, RintoulJL, Al-HashimiO, FarajiF, et al. (2013) Video game training enhances cognitive control in older adults. Nature 501: 97–101.
23. MaillotP, PerrotA, HartleyA (2012) Effects of interactive physical-activity video-game training on physical and cognitive function in older adults. Psychol Aging 27: 589–600.
24. NouchiR, TakiY, TakeuchiH, HashizumeH, AkitsukiY, et al. (2012) Brain training game improves executive functions and processing speed in the elderly: a randomized controlled trial. PLoS ONE 7: e29676.
25. SmithGE, HousenP, YaffeK, RuffR, KennisonRF, et al. (2009) A cognitive training program based on principles of brain plasticity: results from the Improvement in Memory with Plasticity-based Adaptive Cognitive Training (IMPACT) study. J Am Geriatr Soc 57: 594–603.
26. BavelierD, GreenCS, HanDH, RenshawPF, MerzenichMM, et al. (2011) Brains on video games. Nat Rev Neurosci 12: 763–768.
27. Higgins J, Green S, editors (2011) Cochrane handbook for systematic reviews of interventions version 5.1.0. The Cochrane Collaboration.
28. MaherCG, SherringtonC, HerbertRD, MoseleyAM, ElkinsM (2003) Reliability of the PEDro scale for rating quality of randomized controlled trials. Phys Ther 83: 713–721.
29. WykesT, HuddyV, CellardC, McGurkSR, CzoborP (2011) A meta-analysis of cognitive remediation for schizophrenia: methodology and effect sizes. Am J Psychiatry 168: 472–485.
30. Gleser LJ, Olkin I (2009) Stochastically dependent effect sizes. In: Cooper H, Hedges L, Valentine J, editors. The handbook of research synthesis and meta-analysis, 2nd edition. New York: Russell Sage Foundation. pp. 357–376.
31. Borenstein M, Hedges L, Higgins JP, Rothstein HR (2009) Introduction to meta-analysis. Chichester: Wiley.
32. HigginsJP, ThompsonSG (2002) Quantifying heterogeneity in a meta-analysis. Stat Med 21: 1539–1558.
33. HigginsJP, ThompsonSG, DeeksJJ, AltmanDG (2003) Measuring inconsistency in meta-analyses. BMJ 327: 557–560.
34. BorensteinM, HigginsJP (2013) Meta-analysis and subgroups. Prev Sci 14: 134–143.
35. SterneJA, SuttonAJ, IoannidisJP, TerrinN, JonesDR, et al. (2011) Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 343: d4002.
36. EggerM, Davey SmithG, SchneiderM, MinderC (1997) Bias in meta-analysis detected by a simple, graphical test. BMJ 315: 629–634.
37. McDougallS, HouseB (2012) Brain training in older adults: evidence of transfer to memory span performance and pseudo-Matthew effects. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 19: 195–221.
38. SchmiedekF, LovdenM, LindenbergerU (2010) Hundred days of cognitive training enhance broad cognitive abilities in adulthood: findings from the COGITO study. Front Aging Neurosci 2: 27.
39. TheillN, SchumacherV, AdelsbergerR, MartinM, JanckeL (2013) Effects of simultaneously performed cognitive and physical training in older adults. BMC Neurosci 14: 103.
40. GajewskiPD, FalkensteinM (2012) Training-induced improvement of response selection and error detection in aging assessed by task switching: effects of cognitive, physical, and relaxation training. Front Hum Neurosci 6: 130.
41. Wild-WallN, FalkensteinM, GajewskiPD (2012) Neural correlates of changes in a visual search task due to cognitive training in seniors. Neural Plast 2012: 529057.
42. BarnesDE, Santos-ModesittW, PoelkeG, KramerAF, CastroC, et al. (2013) The Mental Activity and eXercise (MAX) trial: a randomized controlled trial to enhance cognitive function in older adults. JAMA Intern Med 173: 797–804.
43. ColzatoLS, van MuijdenJ, BandGP, HommelB (2011) Genetic modulation of training and transfer in older adults: BDNF ValMet polymorphism is associated with wider useful field of view. Front Psychol 2: 199.
44. DahlinE, NybergL, BackmanL, NeelyAS (2008) Plasticity of executive functioning in young and older adults: immediate training gains, transfer, and long-term maintenance. Psychol Aging 23: 720–730.
45. DustmanRE, EmmersonRY, SteinhausLA, ShearerDE, DustmanTJ (1992) The effects of videogame playing on neuropsychological performance of elderly individuals. J Gerontol 47: P168–P171.
46. MahnckeHW, ConnorBB, AppelmanJ, AhsanuddinON, HardyJL, et al. (2006) Memory enhancement in healthy older adults using a brain plasticity-based training program: a randomized, controlled study. Proc Natl Acad Sci U S A 103: 12523–12528.
47. EdwardsJD, WadleyVG, VanceDE, WoodK, RoenkerDL, et al. (2005) The impact of speed of processing training on cognitive and everyday performance. Aging Ment Health 9: 262–271.
48. GoldsteinJ, CajkoL, OosterbroekM, MichielsenM, Van HoutenO, et al. (1997) Video games and the elderly. Soc Behav Pers 25: 345–352.
49. LampitA, HallockH, MossR, KwokS, RosserM, et al. (2014) The timecourse of global cognitive gains from supervised computer-assisted cognitive training: a randomised, active-controlled trial in elderly with multiple dementia risk factors. J Prev Alzheimers Dis 1: 33–39.
50. LeeY, LeeC-R, HwangB (2012) Effects of computer-aided cognitive rehabilitation training and balance exercise on cognitive and visual perception ability of the elderly. J Phys Ther Sci 24: 885–887.
51. VanceD, DawsonJ, WadleyV, EdwardsJ, RoenkerD, et al. (2007) The accelerate study: the longitudinal effect of speed of processing training on cognitive performance of older adults. Rehabil Psychol 52: 89–96.
52. WolinskyFD, Vander WegMW, HowrenMB, JonesMP, MartinR, et al. (2011) Interim analyses from a randomised controlled trial to improve visual processing speed in older adults: the Iowa Healthy and Active Minds Study. BMJ Open 1: e000225.
53. BelchiorP, MarsiskeM, SiscoSM, YamA, BavelierD, et al. (2013) Video game training to improve selective visual attention in older adults. Comput Human Behav 29: 1318–1324.
54. BallK, BerchDB, HelmersKF, JobeJB, LeveckMD, et al. (2002) Effects of cognitive training interventions with older adults: a randomized controlled trial. JAMA 288: 2271–2281.
55. BerryAS, ZantoTP, ClappWC, HardyJL, DelahuntPB, et al. (2010) The influence of perceptual training on working memory in older adults. PLoS ONE 5: e11537.
56. BottiroliS, CavalliniE (2009) Can computer familiarity regulate the benefits of computer-based memory training in normal aging? A study with an Italian sample of older adults. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 16: 401–418.
57. Garcia-CampuzanoMT, Virues-OrtegaJ, SmithS, MoussaviZ (2013) Effect of cognitive training targeting associative memory in the elderly: a small randomized trial and a longitudinal evaluation. J Am Geriatr Soc 61: 2252–2254.
58. LiKZ, RoudaiaE, LussierM, BhererL, LerouxA, et al. (2010) Benefits of cognitive dual-task training on balance performance in healthy older adults. J Gerontol A Biol Sci Med Sci 65: 1344–1352.
59. LussierM, GagnonC, BhererL (2012) An investigation of response and stimulus modality transfer effects after dual-task training in younger and older. Front Hum Neurosci 6: 129.
60. MayasJ, ParmentierFBR, AndresP, BallesterosS (2014) Plasticity of attentional functions in older adults after non-action video game training: a randomized controlled trial. PLoS ONE 9: e92269.
61. O'BrienJL, EdwardsJD, MaxfieldND, PerontoCL, WilliamsVA, et al. (2013) Cognitive training and selective attention in the aging brain: an electrophysiological study. Clin Neurophysiol 124: 2198–2208.
62. PengH, WenJ, WangD, GaoY (2012) The impact of processing speed training on working memory in old adults. J Adult Dev 19: 150–157.
63. van MuijdenJ, BandGP, HommelB (2012) Online games training aging brains: limited transfer to cognitive control functions. Front Hum Neurosci 6: 221.
64. AckermanPL, KanferR, CalderwoodC (2010) Use it or lose it? Wii brain exercise practice and reading for domain knowledge. Psychol Aging 25: 753–766.
65. AndersonS, White-SchwochT, Parbery-ClarkA, KrausN (2013) Reversal of age-related neural timing delays with training. Proc Natl Acad Sci U S A 110: 4357–4362.
66. BasakC, BootWR, VossMW, KramerAF (2008) Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychol Aging 23: 765–777.
67. BootWR, ChampionM, BlakelyDP, WrightT, SoudersDJ, et al. (2013) Video games as a means to reduce age-related cognitive decline: attitudes, compliance, and effectiveness. Front Psychol 4: 31.
68. BozokiA, RadovanovicM, WinnB, HeeterC, AnthonyJC (2013) Effects of a computer-based cognitive exercise program on age-related cognitive decline. Arch Gerontol Geriatr 57: 1–7.
69. EdwardsJD, WadleyVG, MyersRS, RoenkerDL, CissellGM, et al. (2002) Transfer of a speed of processing intervention to near and far cognitive functions. Gerontology 48: 329–340.
70. EdwardsJD, ValdesEG, PerontoC, Castora-BinkleyM, AlwerdtJ, et al. (2013) The efficacy of InSight cognitive training to improve useful field of view performance: a brief report. J Gerontol B Psychol Sci Soc Sci E-pub ahead of print. doi:10.1093/geronb/gbt113
71. LegaultC, JenningsJM, KatulaJA, DagenbachD, GaussoinSA, et al. (2011) Designing clinical trials for assessing the effects of cognitive training and physical activity interventions on cognitive outcomes: the Seniors Health and Activity Research Program Pilot (SHARP-P) study, a randomized controlled trial. BMC Geriatr 11: 27.
72. MillerKJ, DyeRV, KimJ, JenningsJL, O'TooleE, et al. (2013) Effect of a computerized brain exercise program on cognitive performance in older adults. Am J Geriatr Psychiatry 21: 655–663.
73. RasmussonDX, RebokGW, BylsmaFW, BrandtJ (1999) Effects of three types of memory training in normal elderly. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 6: 56–66.
74. RichmondLL, MorrisonAB, CheinJM, OlsonIR (2011) Working memory training and transfer in older adults. Psychol Aging 26: 813–822.
75. ShatilE (2013) Does combined cognitive training and physical activity training enhance cognitive abilities more than either alone? A four-condition randomized controlled trial among healthy older adults. Front Aging Neurosci 5: 8.
76. SternY, BlumenHM, RichLW, RichardsA, HerzbergG, et al. (2011) Space Fortress game training and executive control in older adults: a pilot intervention. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 18: 653–677.
77. BrehmerY, WesterbergH, BackmanL (2012) Working-memory training in younger and older adults: training gains, transfer, and maintenance. Front Hum Neurosci 6: 63.
78. BurkiCN, LudwigC, ChicherioC, de RibaupierreA (2014) Individual differences in cognitive plasticity: an investigation of training curves in younger and older adults. Psychol Res E-pub ahead of print. doi:10.1007/s00426-014-0559-3
79. BuschkuehlM, JaeggiSM, HutchisonS, Perrig-ChielloP, DappC, et al. (2008) Impact of working memory training on memory performance in old-old adults. Psychol Aging 23: 743–753.
80. CasuttG, TheillN, MartinM, KellerM, JanckeL (2014) The drive-wise project: driving simulator training increases real driving performance in healthy older drivers. Front Aging Neurosci 6: 85.
81. HeinzelS, SchulteS, OnkenJ, DuongQL, RiemerTG, et al. (2014) Working memory training improvements and gains in non-trained cognitive tasks in young and older adults. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 21: 146–173.
82. McAvinueLP, GolemmeM, CastorinaM, TattiE, PigniFM, et al. (2013) An evaluation of a working memory training scheme in older adults. Front Aging Neurosci 5: 20.
83. SandbergP, RonnlundM, NybergL, Stigsdotter NeelyA (2014) Executive process training in young and old adults. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 21: 577–605.
84. ShatilE, MikuleckaJ, BellottiF, BuresV (2014) Novel television-based cognitive training improves working memory and executive function. PLoS ONE 9: e101472.
85. von BastianCC, LangerN, JanckeL, OberauerK (2013) Effects of working memory training in young and old adults. Mem Cognit 41: 611–624.
86. SimpsonT, CamfieldD, PipingasA, MacphersonH, StoughC (2012) Improved processing speed: online computer-based cognitive training in older adults. Educ Gerontol 38: 445–458.
87. PeretzC, KorczynAD, ShatilE, AharonsonV, BirnboimS, et al. (2011) Computer-based, personalized cognitive training versus classical computer games: a randomized double-blind prospective trial of cognitive stimulation. Neuroepidemiology 36: 91–99.
88. WangMY, ChangCY, SuSY (2011) What's cooking?—cognitive training of executive function in the elderly. Front Psychol 2: 228.
89. ColcombeS, KramerAF (2003) Fitness effects on the cognitive function of older adults: a meta-analytic study. Psychol Sci 14: 125–130.
90. GrossAL, ParisiJM, SpiraAP, KueiderAM, KoJY, et al. (2012) Memory training interventions for older adults: a meta-analysis. Aging Ment Health 16: 722–734.
91. Melby-LervågM, HulmeC (2013) Is working memory training effective? A meta-analytic review. Dev Psychol 49: 270–291.
92. KivipeltoM, SolomonA, AhtiluotoS, NganduT, LehtisaloJ, et al. (2013) The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER): study design and progress. Alzheimers Dement 9: 657–665.
93. AnsteyKJ, BurnsRA, BirrellCL, SteelD, KielyKM, et al. (2010) Estimates of probable dementia prevalence from population-based surveys compared with dementia prevalence estimates based on meta-analyses. BMC Neurol 10: 62.
94. VerhaeghenP, MarcoenA, GoossensL (1992) Improving memory performance in the aged through mnemonic training: a meta-analytic study. Psychol Aging 7: 242–251.
95. LuscherC, NicollRA, MalenkaRC, MullerD (2000) Synaptic plasticity and dynamic modulation of the postsynaptic membrane. Nat Neurosci 3: 545–550.
96. HoltzerR, ShumanM, MahoneyJR, LiptonR, VergheseJ (2011) Cognitive fatigue defined in the context of attention networks. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 18: 108–128.
97. WangZ, ZhouR, ShahP (2014) Spaced cognitive training promotes training transfer. Front Hum Neurosci 8: 217.
98. PennerIK, VogtA, StöcklinM, GschwindL, OpwisK, et al. (2012) Computerised working memory training in healthy adults: a comparison of two different training schedules. Neuropsychol Rehabil 22: 716–733.
99. GatesNJ, SachdevPS, Fiatarone SinghMA, ValenzuelaM (2011) Cognitive and memory training in adults at risk of dementia: a systematic review. BMC Geriatr 11: 55.
100. Bahar-FuchsA, ClareL, WoodsB (2013) Cognitive training and cognitive rehabilitation for mild to moderate Alzheimer's disease and vascular dementia. Cochrane Database Syst Rev 6: CD003260.
101. JaeggiSM, BuschkuehlM, JonidesJ, ShahP (2011) Short- and long-term benefits of cognitive training. Proc Natl Acad Sci U S A 108: 10081–10086.
Published in: PLOS Medicine, 2014, Issue 11