Lower Cutoff Points for Montreal Cognitive Assessment Needed

Medically reviewed by Carmen Pope, BPharm.

Jan. 17, 2024 -- The Montreal Cognitive Assessment (MoCA) cutpoints for identifying mild cognitive impairment (MCI) or dementia are inappropriately high in a diverse community setting, yielding a high false-positive rate, according to a study published online Jan. 10 in the Journal of the American Geriatrics Society.

Stimmel, Ph.D., from Albert Einstein College of Medicine in New York City, and colleagues examined the utility and discriminative validity of the Spanish and English MoCA versions to identify cognitive impairment among diverse community-dwelling older adults. The analysis included 231 older adults (aged 65 years and older; 43 percent Hispanic and 39 percent Black/African American) with cognitive concerns attending outpatient primary care.

The NHANES program suspended field operations in March 2020 due to the coronavirus disease 2019 (COVID-19) pandemic. As a result, data collection for the NHANES 2019-2020 cycle, including the cognitive functioning assessment, was not completed. Data collected in 2019-March 2020 can be accessed as convenience samples through the NCHS Research Data Center (RDC). Any analyses based solely on the 2019-March 2020 data would not be generalizable to the U.S. civilian non-institutionalized population. Please refer to the Analytic Notes section for more details on the use of the data.

Cognitive functioning (variable name prefix CFQ) testing was performed using the survey-adapted Montreal Cognitive Assessment (MoCA-SA), developed and administered for the National Social Life, Health and Aging Project (Shega et al.). The MoCA-SA is based on the Montreal Cognitive Assessment (MoCA), a multidimensional cognitive screening instrument frequently used by clinical health professionals to detect mild cognitive impairment and Alzheimer's disease (Nasreddine et al.).
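The study's central point is that a screening cutpoint directly controls the false-positive rate: the higher the cutoff, the more cognitively normal people are flagged as impaired. The sketch below illustrates that relationship with invented scores and an invented cutoff value; it does not use the study's actual data or the cutpoints under evaluation.

```python
# Hypothetical illustration of how a screening cutpoint drives the
# false-positive rate. The cutoff and scores below are invented for
# demonstration; they are not the study's numbers.

def classify(score: int, cutoff: int) -> bool:
    """Flag a participant as screen-positive when score < cutoff."""
    return score < cutoff

def false_positive_rate(scores, impaired_flags, cutoff):
    """Share of truly unimpaired participants flagged as impaired."""
    normals = [s for s, imp in zip(scores, impaired_flags) if not imp]
    if not normals:
        return 0.0
    flagged = sum(classify(s, cutoff) for s in normals)
    return flagged / len(normals)

# Toy sample: four unimpaired participants, two impaired.
scores = [24, 22, 27, 21, 18, 16]
impaired = [False, False, False, False, True, True]
print(false_positive_rate(scores, impaired, cutoff=26))  # flags 3 of 4 normals
```

Lowering the hypothetical cutoff from 26 to 23 in this toy sample cuts the false-positive rate from 0.75 to 0.5, which is the kind of trade-off the study's argument for lower cutpoints turns on.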
The MoCA-SA incorporates items from 8 MoCA cognitive subdomains: (1) orientation; (2) naming; (3) executive function; (4) visuo-construction; (5) memory; (6) attention; (7) language; and (8) abstraction. Scores for the MoCA-SA (ranging from 0 to 20) are highly correlated with full MoCA scores (Kotwal et al., Dale et al.). This instrument was selected for NHANES to shorten administration time and reduce respondent burden while preserving the sensitivity of the original MoCA. The data file contains variables that provide scores for each MoCA-SA item. Information obtained during test administration that may be related to cognitive performance (e.g., speech, hearing, and visual difficulties) or to the testing environment was coded and included in the data set. Analysts may make their own determinations about which variable responses to use when calculating MoCA-SA scores, or may choose to analyze only scores from individual subdomains.

Participants aged 60 years and older in the NHANES 2019-March 2020 convenience sample who spoke English or Spanish were eligible. Persons requiring a proxy informant or a language interpreter were not eligible.

Interview Setting and Mode of Administration

Questions and tasks were administered at the mobile examination center (MEC) by trained interviewers using the Computer-Assisted Personal Interview (CAPI) system during the MEC interview session. Interviewers closely followed the administration guidelines of the MoCA.
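Because the file ships itemized scores and leaves the total up to the analyst, a total-score calculation is just a sum of item points across the 8 subdomains, bounded by the 0-20 range. This is a minimal sketch of that idea: the subdomain names come from the documentation above, but the per-item point breakdown and the dictionary layout are invented for illustration and do not correspond to actual CFQ variable names.

```python
# Hypothetical sketch of totaling MoCA-SA item scores by subdomain.
# Subdomain names follow the NHANES documentation; the item lists and
# point values below are invented, not actual CFQ variables.

SUBDOMAINS = ["orientation", "naming", "executive_function",
              "visuo_construction", "memory", "attention",
              "language", "abstraction"]

def moca_sa_total(item_scores: dict) -> int:
    """Sum item points across all subdomains; valid totals are 0-20."""
    total = sum(sum(points) for points in item_scores.values())
    if not 0 <= total <= 20:
        raise ValueError(f"MoCA-SA total out of range: {total}")
    return total

# Invented example participant (item counts per subdomain are illustrative).
example = {
    "orientation": [1, 1, 1, 1],
    "naming": [1],
    "executive_function": [1, 1],
    "visuo_construction": [1, 1],
    "memory": [1, 0, 1, 0, 1],
    "attention": [1, 1],
    "language": [1, 0],
    "abstraction": [1, 1],
}
print(moca_sa_total(example))  # → 17
```

Keeping the scores itemized in a per-subdomain mapping also supports the documentation's other suggested use: analyzing individual subdomains (e.g., `sum(example["memory"])`) without computing a total at all.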
Survey participants orally agreed to audio record the testing session before the administration of any questions or tasks. Tests were administered by four primary interviewers. After each interview, two interviewers independently assigned points for each item in the assessment according to MoCA scoring criteria (MoCA Version 8.1). Scoring was usually conducted on the same day as the assessment. Interviewers entered itemized scores and verbatim responses into an electronic scoring application, and the scores of the two interviewers were compared. Any differences in scores were adjudicated by a reviewer who had completed MoCA certification and training.

Approximately 5% of audio-recorded interviews were independently reviewed over the course of the data collection cycle to evaluate consistency in the administration of instructions, to determine accuracy in scoring, and to examine differences in scoring by interviewers. Recordings of approximately 4% of participants with refusal and don't know responses were evaluated to ensure the consistent application of response codes.

Data Processing and Editing

Edits were made, as necessary, to ensure the completeness, consistency, and analytic usefulness of the data. Summary variables with test administration notes, including instructions not understood or nonperformance due to a physical impairment, were created for some tasks.
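The double-scoring workflow described above (two interviewers score independently, agreements are accepted, and disagreements go to a certified reviewer) can be sketched as a simple reconciliation pass. The item names, data shapes, and the stand-in adjudicator below are assumptions for illustration; the actual NHANES scoring application is not public.

```python
# Sketch of the dual-scorer reconciliation step described above: items
# where both interviewers agree are accepted, and any disagreement is
# routed to an adjudicator. Item names and structure are hypothetical.

def reconcile(scorer_a: dict, scorer_b: dict, adjudicate) -> dict:
    """Return final item scores, calling `adjudicate` on disagreements."""
    final = {}
    for item, a_score in scorer_a.items():
        b_score = scorer_b[item]
        if a_score == b_score:
            final[item] = a_score
        else:
            final[item] = adjudicate(item, a_score, b_score)
    return final

a = {"orientation_date": 1, "naming_item1": 1, "recall_word1": 0}
b = {"orientation_date": 1, "naming_item1": 0, "recall_word1": 0}

# Trivial stand-in adjudicator that sides with scorer A; in the real
# workflow a MoCA-certified reviewer resolves the difference.
result = reconcile(a, b, lambda item, x, y: x)
print(result)  # only naming_item1 needed adjudication
```

Separating the comparison from the adjudication rule mirrors the documented process: agreement is mechanical, while disagreement requires human judgment.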