This is Part 1 of a four-part series. See also Part 2, Part 3, Part 4. Read a PDF of the entire series.
19 December 2012. As populations age worldwide and the number of people with dementia is set to soar over the next few decades, a crisis in eldercare looms. At the same time, the use of personal technology—smartphones, tablets, wearable monitors—is exploding. Can technology help society avert the crisis?

Some researchers envision a future in which older adults with cognitive decline or Alzheimer’s disease could stay independent longer with the help of technology. Robots and interactive computers would aid an impaired senior to complete simple tasks. Monitoring systems would send an alarm to relatives if the person fell, skipped medication, or was otherwise in difficulty. Interactive computer games and online communities tailored to people with dementia would provide cognitive and social stimulation that might slow decline.

Sound like science fiction? These ideas are being tested now in research studies. Some of the technology is on the market, and much of the rest may become available within the next three to five years, researchers predict.
It remains to be seen how well this technology will work. How comfortable will AD patients and their families be with it? What problems will it bring? As with other technological advances, society will grapple with the question of how to maintain dignity, privacy, and the caregiver’s human touch as machines become more integrated into our lives.
Technology for the Elderly
In 2005, the Center for Aging Services Technologies (CAST), an international coalition of technology companies, research universities, and aging services organizations, produced a video for the White House Conference on Aging. It showed how a hypothetical family might care for an aging relative in the future (click image below to view video). An actor portrays an elderly man with cognitive decline who continues living alone in his home, thanks to a suite of technology. Home monitors alert his family to any changes in his health or routine. Automated systems turn off the gas stove if he forgets. Video connections allow him to chat with distant family members, and computer games help maintain his cognitive skills. When the video was made, most of the featured technology did not yet exist, said Majd Alwan, CAST’s executive director. “Today, the vast majority of these technologies are a reality and are available on the market.”
Technology in the home might help seniors stay independent. Image courtesy of the LeadingAge Center for Aging Services Technologies
Not only does such technology hold promise for improving the quality of life for people with dementia and their caregivers, it may also advance research and strengthen clinical trials, researchers point out. Sensitive, computer-based tests may be able to detect subtle signs of cognitive change long before clinical symptoms develop, allowing researchers to recruit preclinical patients into prevention trials. Monitoring systems will provide rich, low-cost datasets for tracking the progression of decline, enabling scientists to detect small improvements due to treatment. This will be more reliable, valid, and sensitive than current clinical assessments, which are administered too infrequently and are prone to huge variation, predicted Jeffrey Kaye at the Oregon Health and Science University, Portland. Such innovations may reduce the size and expense of clinical trials, allowing more agents to be tested, agreed Stephen Bonasera at the University of Nebraska Medical Center, Omaha.
This Alzforum series looks at the types of technology now under development and how they might be used. Part 1 covers early diagnosis of cognitive problems, and Part 2 will discuss monitoring technology. Part 3 will describe innovations that aim to help care for seniors with dementia and lessen the burden on caregivers. The final installment will showcase attempts to intervene in the illness and strengthen cognitive skills, either through prevention or therapy.
The featured studies are but a sample of those under development. The field of “gerontechnology” is rapidly growing, with many companies, big and small, offering products in this area. Several large research groups specialize in gerontechnology, for example, the Oregon Center for Aging and Technology (ORCATECH) in Portland and the Technology Research for Independent Living (TRIL) in Dublin, Ireland. The International Society for Gerontechnology holds a biannual conference and publishes its own journal, and the Gerontological Society of America includes a technology subgroup. The recent Alzheimer’s Challenge 2012 awarded a total of $300,000 to five finalists for their technological tools to better diagnose and monitor people with AD (see ARF related news story). Alzforum invites comments on additional technologies of interest.
Canary in a Coal Mine: Earliest Warning Signs of AD
Computerized commercial tests, such as the Cambridge Neuropsychological Test Automated Battery (CANTAB), the Cognitive Drug Research battery, and the CogState computerized cognitive tests, are already used in research and clinical trials. They track cognitive decline and can detect treatment effects in early AD. However, with recent attempts to define a preclinical stage of AD (see ARF related news story) and growing interest in conducting preventive trials (see ARF related news story), researchers need to find more sensitive methods to detect the earliest cognitive changes. “Much of the current debate centers on what tests to use in prodromal or preclinical patients,” said John Harrison at Metis Cognition, Warminster, Wiltshire, U.K. Harrison developed the Neuropsychological Test Battery (NTB). He pointed out that although traditional instruments like the Alzheimer’s Disease Assessment Scale-cognitive subscale (ADAS-cog) include some memory tests that pick up deficits in very early AD patients, on most of the measures almost everyone in the mild stages of AD scores at the maximum. In other words, the ADAS-cog has "ceiling effects," so it cannot pick up improvements in response to drugs (see, e.g., Nature news story).
Many companies are therefore developing more sensitive tests, although most are not ready for prime time, Harrison said. The majority of such tests focus on measures of episodic memory, working memory, and executive function. They use repeated testing of the same person over time to detect change. This strategy picks up people with high levels of education who still score above the population mean, but whose cognition is in decline. One test gaining popularity for detecting prodromal AD is the free and cued selective reminding test (FCSRT), in which participants must remember 16 pictures, first without aid and then with the help of cues. Low scores on this test have been shown to associate with AD cerebrospinal fluid biomarkers in people with mild cognitive impairment (see Wagner et al., 2012). The FCSRT is recommended by research diagnostic criteria for prodromal AD (Dubois et al., 2010). In the future, these types of cognitive screening tests could become part of a standard annual checkup, Harrison suggested.
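The repeated-testing strategy described above can be illustrated with a minimal sketch. Everything here is hypothetical for illustration: the scores, the population norm, and the slope cutoff are invented, not values from any actual test battery. The idea is simply that a per-person trend line can flag decline even when every individual score still sits above the population mean.

```python
# Hypothetical sketch: flag within-person decline from repeated test scores,
# even when every score stays above the population mean.
from statistics import mean

def score_slope(sessions, scores):
    """Ordinary least-squares slope of score versus session number."""
    mx, my = mean(sessions), mean(scores)
    num = sum((x - mx) * (y - my) for x, y in zip(sessions, scores))
    den = sum((x - mx) ** 2 for x in sessions)
    return num / den

POPULATION_MEAN = 50.0  # illustrative norm, not a real cutoff

def flag_decline(scores, slope_cutoff=-0.5):
    """Return (declining?, still above norm?, slope) for one participant."""
    sessions = list(range(1, len(scores) + 1))
    slope = score_slope(sessions, scores)
    above_norm = all(s > POPULATION_MEAN for s in scores)
    return slope < slope_cutoff, above_norm, slope

# A highly educated participant: every score above the norm, but trending down.
declining, above_norm, slope = flag_decline([72, 70, 67, 65, 62])
print(declining, above_norm, round(slope, 2))
```

A single cross-sectional score of 62 would look unremarkable against a norm of 50; only the longitudinal slope reveals the decline, which is the point of testing the same person repeatedly.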
Another problem that dogs cognitive testing is the practice effect, where growing familiarity allows a person to improve on the test (see ARF CTAD story). Harrison claims that computerized tests beat the standard paper-and-pencil variety hands down when it comes to practice effects. With computers, the test can change each time a person takes it. For example, in the Mini-Mental State Exam, clinicians often give patients the same three words (apple, penny, table) to try to remember every time they take the test. Over time, even impaired patients learn the words. Anecdotally, researchers tell of patients coming in for study visits asking, "Hi doc, are you going to test me on apple, penny, table again today?" By contrast, a computerized test can choose from hundreds of randomly generated words, so that each time patients take it, they see a different set. Digital tests have the added advantage of being able to measure things like reaction time, which provide additional clues to early impairments.
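The two advantages described here, fresh stimuli per session and reaction-time capture, can be sketched in a few lines. This is an illustrative sketch, not any vendor's actual test: the word pool, session logic, and function names are invented.

```python
# Illustrative sketch of how a computerized test can sidestep practice effects:
# draw a fresh word list for each visit and time each response.
import random
import time

WORD_POOL = [  # stand-in for a pool of hundreds of normed words
    "apple", "penny", "table", "river", "candle", "garden", "mirror",
    "button", "ladder", "pillow", "anchor", "basket", "violin", "meadow",
]

def new_session_words(seen_before, n=3, rng=None):
    """Pick n words the participant has not been tested on yet."""
    rng = rng or random.Random()
    unseen = [w for w in WORD_POOL if w not in seen_before]
    words = rng.sample(unseen, n)
    seen_before.update(words)
    return words

def timed_response(prompt_fn):
    """Run any response callback and measure its reaction time in seconds."""
    start = time.perf_counter()
    answer = prompt_fn()
    return answer, time.perf_counter() - start

seen = set()
first_visit = new_session_words(seen, rng=random.Random(1))
second_visit = new_session_words(seen, rng=random.Random(2))
print(first_visit, second_visit)  # no overlap between visits
```

Tracking the `seen_before` set guarantees that no word repeats across visits, so a participant cannot rehearse "apple, penny, table"; the reaction-time wrapper records a measure that paper-and-pencil testing cannot.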
An example of this is the Digital Clock Drawing Test (dCDT) under development by Dana Penney at the Lahey Clinic, Burlington, Massachusetts, and Randall Davis at MIT. A finalist in the Alzheimer’s Challenge 2012 (see ARF related news story), this technology adapts the traditional paper-and-pencil clock drawing test to a digital format. In the paper version, clinicians suspect cognitive problems when patients cannot draw a normal-looking clock. In the digital version, people use a digitizing ballpoint pen to draw on paper; the pen then transmits the data to a computer, where the dCDT software analyzes what they draw and how they draw it. Even when the final product looks normal, the computer can detect changes in the process. People with early cognitive impairment hesitate more and spend more time thinking rather than drawing compared with healthy controls. Penney and Davis also report that the test can measure executive function by the presence of very tiny “hooklets” that occur when people think about making the next stroke before they finish the current one. Often only half a millimeter long, these hooklets are a good thing; their disappearance may indicate declining executive function. Early evidence suggests that the Digital Clock Drawing Test can pick up preclinical cognitive changes that correlate with atrophy in the parietal lobe, the researchers claim. Penney and Davis are collaborating with Rhoda Au at Boston University and the Framingham Heart Study to validate the test in the FHS’s longitudinal dataset to see whether it can detect presymptomatic changes in people who test positive for AD biomarkers.
Clock drawn by a person with MCI appears normal, but the computer detects hesitation that betrays impairment. © 2012 Lahey Clinic and MIT
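The process measure at the heart of the dCDT, time spent thinking versus time spent drawing, can be caricatured with a toy metric. This is a hypothetical sketch under invented assumptions (timestamped pen strokes as simple tuples); it is not the dCDT's actual algorithm.

```python
# Hypothetical sketch of a process-based pen metric: the ratio of pen-up
# "thinking" time to pen-down "inking" time, from timestamped strokes.
# The stroke format is invented; this is not the dCDT's actual analysis.

def think_ink_ratio(strokes):
    """strokes: list of (pen_down_time, pen_up_time) pairs in seconds,
    ordered as drawn. Returns pause time between strokes / drawing time."""
    ink = sum(up - down for down, up in strokes)
    think = sum(strokes[i + 1][0] - strokes[i][1]
                for i in range(len(strokes) - 1))
    return think / ink

# Two drawings that LOOK identical on paper, but differ in process:
# the second drawer pauses much longer between strokes.
fluent   = [(0.0, 1.0), (1.2, 2.0), (2.1, 3.0)]  # short pauses
hesitant = [(0.0, 1.0), (2.5, 3.3), (5.0, 5.9)]  # long pauses
print(round(think_ink_ratio(fluent), 2), round(think_ink_ratio(hesitant), 2))
```

Both stroke sequences put the same amount of ink on the page, so the finished clocks would be indistinguishable; only the timing data exposes the hesitation, which is exactly what the paragraph means by detecting changes in the process rather than the product.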
Visual deficits are another early warning sign of cognitive decline, said Creighton (Tony) Phelps at the National Institute on Aging, Bethesda, Maryland. People in the early stages of decline perform poorly on some standard ophthalmology exams such as pattern and contrast detection, and show deficits in visual memory (see, e.g., Kawas et al., 2003; Cronin-Golomb et al., 1995). For example, a test of visual short-term memory developed by Mario Parra at the University of Edinburgh, U.K., detects memory problems in young, asymptomatic AD mutation carriers (see ARF related news story). Vision changes may precede AD diagnosis by more than a decade. They seem to reflect deterioration in the limbic system and medial temporal lobe—areas crucial for visual perception (see Rosen, 2004). Researchers in the ophthalmology industry have expressed interest in modifying their equipment to screen for AD, Phelps said.
Another type of visual impairment shows up early in the disease. According to work by Charles Duffy at the University of Rochester Medical Center, New York, people with mild AD become easily confused by objects moving through their visual field. This impairs their ability to navigate through space, as shown by a computerized test (see Mapstone and Duffy, 2010). Degeneration in subcortical structures important for navigation probably underlies the problem, the authors note. They suggest that such tests could be used to screen elderly drivers in the future. The findings fit with other work showing that visual perception falters in early mild cognitive impairment (see Newsome et al., 2012). Phelps noted that in a subset of people with AD, the visual movement deficit seems to be an early sign that precedes obvious cognitive impairment, and may correlate with posterior cortical atrophy (PCA), a variant of AD that affects the visual cortex (see ARF related news story on PCA).
These types of visual assessment, the Digital Clock Drawing Test, and other computer-based tests offer the potential to spot cognitive problems earlier than ever before. But what happens after an older adult receives a diagnosis of mild cognitive impairment, or even AD? For a glimpse into how technology can help monitor the health of impaired seniors and improve clinical trials, see Part 2.—Madolyn Bowman Rogers.