Tuesday, November 17, 2015

The Psychology of Anti-Refugee Attitudes, Part 1

As a psychology 101 instructor whose students are largely non-psychology majors, I generally introduce the course objectives by stating, “You are going to forget most of what I teach you in this class.” Although that may sound like a pessimistic way to begin a semester, there is a reason for this statement. Those students who elect not to major in psychology will likely forget about schedules of reinforcement, the functions of the parietal lobe, and the specifics of Piaget’s stages, and that’s okay. What I expect them to take from my course is an ability to think critically about issues they will face in the future, whether personal, political, or otherwise – a skill that will be useful throughout their lives, no matter their chosen career. Throughout the course, we discuss a number of psychological phenomena that affect critical thinking abilities. What follows is a breakdown of several of these phenomena, using the current debate surrounding the Syrian refugee crisis as a framework. These phenomena include biases in thinking as well as social psychological principles. For the sake of brevity, I will divide this into two posts: the first discussing thinking biases, and the second addressing social psychological concepts.

First, I want to discuss critical thinking briefly. While there are multiple theories regarding stages/levels of critical thinking, the one we focus on in my course was developed by King and Kitchener (2004), who proposed several levels of critical thinking divided into three categories:

1)     Pre-reflective thinkers tend to assume that a correct answer always exists and that it can be obtained through the senses or from authorities. So, in thinking about the refugee crisis, pre-reflective thinkers are likely to base their opinions on whatever politicians, media outlets, or other “authority figures” tell them. Certainly we are all influenced by this, but pre-reflective thinkers take no other steps to think for themselves. They are also uncomfortable with nuance or a lack of certainty, believing a clear solution is always available. These individuals will assume that some action (bombing Syria, putting troops on the ground, impeaching the president, refusing refugees, etc.) will solve the problem of ISIS.

2)     Quasi-reflective thinkers recognize that some things cannot be known with absolute certainty and that judgments should be supported by evidence, yet they pay attention only to evidence that fits what they already believe. On the positive side, they are able to acknowledge that a clear correct solution may not always exist. The issue of ISIS is a perfect example. How can we get rid of them? Should we put boots on the ground and stomp them out, likely being forced to occupy indefinitely to keep the peace? Should we bomb from afar? Should we stay out of the Middle East completely? There are no easy answers here. On the other hand, quasi-reflective thinkers ignore evidence that goes against their beliefs – another powerful psychological concept referred to as “confirmation bias,” and something I am certainly not immune to despite being aware of it. Those who do not want to allow Syrian refugees into the U.S. are likely to pay attention only to information suggesting that Syrians are dangerous, read only articles criticizing the Obama administration, and ignore conflicting evidence and perspectives. Thus, a conservative individual may obtain his news only from Fox News Channel, which confirms and further strengthens his beliefs. Those on the other side are equally likely to pay attention only to evidence that would support the acceptance of Syrian refugees.

3)     Those who use reflective judgment acknowledge that some things can never be known with certainty, but also that some judgments are more valid than others. These individuals also use dialectical reasoning, which involves considering and comparing opposing points of view in order to resolve differences (essentially what juries are supposed to do in deciding a case). Most people show no evidence of reflective judgment until their middle or late 20s, if ever.

Before we move on, I want to make an additional point about confirmation bias. Have you ever tried arguing with someone about something you both feel strongly about? Have you ever successfully changed someone’s mind on that issue? Probably not, and confirmation bias is one of the main reasons why. Let me tell you about a recent study that illustrates how this works. Researchers at UCLA divided adults who were skeptical of the safety of vaccinations into three groups. One group was provided information from the CDC explaining that the measles, mumps, and rubella (MMR) vaccine is safe. The second group read materials that described the dangers of those diseases and viewed images of children with the diseases, along with information on how vaccines can prevent them. The third group was a control that read a statement unrelated to MMR vaccines. The researchers found that explaining the dangers of the diseases was the only approach that increased support for vaccination – presenting evidence of the safety of vaccines had no effect (Horne, Powell, Hummel, & Holyoak, 2015).

That evidence doesn’t change people’s minds is no surprise. Other research has found that not only do people ignore disconfirming evidence, but when people have strong beliefs, such evidence can sometimes serve to make people even more entrenched in their views (Nyhan & Reifler, 2010).

In addition to confirmation bias, we use certain shortcuts, known as heuristics, to help us quickly process information and make decisions (Myers & DeWall, 2014). These heuristics are useful most of the time, but they can sometimes lead us astray. One such example is the affect heuristic, in which we judge the goodness of a situation based on how it makes us feel. This is sort of like “going with your gut” and can be adaptive in many situations. If a situation frightens you, you are likely to try to escape it, which could possibly save your life. At a more mundane level, think about how you choose which cereal you are going to buy at the store. You could go through each box, examining the nutrition content and analyzing the taste, texture, price, and smell of each one; however, this would probably waste a lot of time. Instead, you probably just see one that you feel positively about, pull it off the shelf, and move on with your day.

However, the affect heuristic can have other effects – something media outlets know well. Using fearmongering techniques, the media can manipulate our emotions surrounding an issue, which affects how we feel about it. Those who do not want to accept refugees have a strong fear of a terrorist attack, and likely of Muslims in general. Others have an emotional reaction of empathy, which outweighs their fear and influences them to welcome refugees.

The final concept I want to mention is another shortcut, the availability heuristic. Basically, we tend to judge the probability of an event occurring based on how easy it is to think of instances of that event. Whenever there is a terrorist attack, we hear about it on the news, making that information very available in our minds. What we do not hear are stories about Muslim people who are peaceful citizens and not terrorists; that information is not newsworthy. Because it’s easier for us to recall examples of Muslims being terrorists than examples of them being peaceful, we may overestimate the probability of a Muslim refugee being a potential terrorist.

In part 2, I’ll talk about some of the social psychological principles that are influencing people’s perceptions of whether refugees should be welcomed into our country.

References

Horne, Z., Powell, D., Hummel, J. E., & Holyoak, K. J. (2015). Countering antivaccination attitudes. Proceedings of the National Academy of Sciences, 112, 10321-10324.

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32, 303-330.

King, P. M., & Kitchener, K. S. (2004). Reflective judgment: Theory and research on the development of epistemic assumptions through adulthood. Educational Psychologist, 39, 5-18.


Myers, D. G., & DeWall, C. N. (2014). Psychology in everyday life. New York: Worth.

Sunday, February 1, 2015

Welcome Back to the Blog!

After quite a long hiatus, Cortex Unfolded is back, and we're excited to get back to work! Sometimes classes, teaching, practicums, and research get in the way, but we now have a renewed commitment to setting aside time to update you all on the latest neuropsychology-related news and research. While we may not post a blog every week (although we'll try), expect to see regular tweets @cortexunfolded. The main focus will remain on topics related to clinical neuropsychology; however, you are also likely to see some occasional posts addressing issues in forensic psychology due to Brian's shift in interests.

Also, we would love to have some guest posts! If you come across or are working on some interesting research or other information that you would like to share with us, let us know, and we'll be happy to post it.

Now, what's the over-under on the number of concussed players in the Super Bowl who are allowed to return to the game?


Sunday, March 9, 2014

A New Trail Making Test

The Trail Making Test (TMT) is an oldie but a goodie. This paper-and-pencil neuropsychological measure consists of two parts: TMT-A is composed of numbers enclosed in circles, and the examinee is asked simply to connect the numbered circles in ascending order as quickly as possible. TMT-B has both letters and numbers, and examinees must connect "1" to "A," "A" to "2," "2" to "B," and so on. TMT-B has been considered one of the most sensitive indicators of overall brain impairment due to the multiple abilities it taps into (e.g., psychomotor speed, attention, working memory, visual scanning, mental flexibility).

However, the inclusion of the English alphabet has limited the use of the test to Western populations. Citing this limitation, along with evidence that individuals with less education or mild cognitive impairment (MCI) often struggle to complete the TMT, researchers devised an alternative version of the test (Kim, Baek, & Kim, 2014). The TMT-B&W (black and white) replaces the letters of the original with a second, identical set of numbers that are enclosed in black circles instead of white ones. Instead of alternating between numbers and letters, examinees connect each white-circled number with its black counterpart before moving on to the next number (see below). Part A is similar to the original TMT-A, but with all of the even numbers enclosed in black circles.


[Figure: sample items from the TMT-B&W (from Kim et al., 2014)]
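
If the two alternation rules are easier to see written out, here is a tiny sketch (my own illustration in Python, not material from Kim et al., 2014) that prints the connection order for the first few items of the original TMT-B and the TMT-B&W.

```python
# Illustrative only: prints the order in which circles are connected.
# "W" = number in a white circle, "B" = the same number in a black circle.
import string

def tmt_b_sequence(n):
    """Original TMT-B: alternate numbers and letters (1-A-2-B-...)."""
    seq = []
    for i in range(1, n + 1):
        seq.append(str(i))
        seq.append(string.ascii_uppercase[i - 1])
    return seq

def tmt_bw_sequence(n):
    """TMT-B&W: connect each white-circled number to its black-circled
    counterpart before moving on to the next number (1W-1B-2W-2B-...)."""
    seq = []
    for i in range(1, n + 1):
        seq.append(f"{i}W")
        seq.append(f"{i}B")
    return seq

print(tmt_b_sequence(4))   # ['1', 'A', '2', 'B', '3', 'C', '4', 'D']
print(tmt_bw_sequence(4))  # ['1W', '1B', '2W', '2B', '3W', '3B', '4W', '4B']
```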




The authors administered the TMT-B&W and the original TMT (along with some other neuropsych measures) to three groups of participants in South Korea: a control group, patients diagnosed with MCI, and individuals with Alzheimer's disease (AD). Overall, a higher proportion of participants were able to complete the TMT-B&W than the original TMT (participants with lower education especially struggled to complete the TMT). Another interesting finding was that TMT-B performance only distinguished the AD group from the control group, while the TMT-B&W was able to distinguish between all three groups (i.e., control, MCI, and AD).





The latter result suggests the new version of the TMT may be more sensitive than the older one. However, it is unclear whether this finding is due only to the Korean participants' struggles to complete the original TMT (the control and MCI groups may have performed equally poorly because they did not know the English alphabet). One would intuitively expect this test to be less sensitive than the original TMT, because it appears easier. Specifically, the working memory demands of the TMT-B&W seem lighter, given that examinees only have to keep in mind the number they just connected, rather than having to track both numbers and letters. I'd be interested in a factor analysis to see if the two measures are tapping into the exact same constructs.

The authors' goal was to provide a version of the TMT that could be used with non-Western and illiterate populations. The TMT-B&W certainly shows promise in this regard, as participants tolerated the test well, and correlations with the original TMT and other measures provided evidence of good construct validity. However, participants' level of English fluency was not measured or described, so it's impossible to tell how much of a role this played in the study. Even in Western populations, the test may be useful with illiterate or poorly-educated individuals. A comparison of the two versions of the TMT in an English-speaking population would provide further information on whether this is the case.

Kim, H. J., Baek, M. J., & Kim, S. (2014). Alternative type of the Trail Making Test in nonnative English speakers: The Trail Making Test: Black & White. PLoS ONE, 9(2), 1-6.

Friday, February 7, 2014

Learn More, Care More?

Cognitive reserve has been linked to apathy for the first time, according to a new article in Archives of Clinical Neuropsychology (Shapiro, Mahoney, Peyser, Zingman, & Verghese, 2014). Cognitive reserve refers to the brain’s ability to demonstrate resilience to brain damage resulting from conditions such as Alzheimer’s disease. As an example, individuals with a high amount of cognitive reserve may experience few memory problems despite Alzheimer’s pathology existing in the brain. Factors that may help build cognitive reserve include education, higher intelligence, and regular engagement in both mental and physical activities throughout the lifespan. (For those interested in cognitive reserve in general, check out the fascinating video below on the Nun Study.)


The protective effects of cognitive reserve have generally been examined in relation to neurocognitive domains such as memory. Shapiro and colleagues wanted to see if cognitive reserve would also be protective against apathy. More specifically, they investigated individuals diagnosed with Human Immunodeficiency Virus (HIV), an infection that is often associated with neurocognitive and neuropsychiatric symptoms, including apathy. Cognitive reserve was measured through a composite score consisting of participants’ highest level of educational attainment and scores on the Wechsler Test of Adult Reading (WTAR), while apathy was measured using a brief self-report measure. The researchers accounted for possible confounding variables such as age, gender, disease duration, markers of disease severity, and scores on the Beck Depression Inventory.

Thirty-one percent of participants demonstrated clinically significant apathy based on the self-report measure. The authors stated that cognitive reserve significantly predicted apathy overall (p = .02), but the method section indicated that an alpha level of .01 was used for all analyses; by that standard, this main effect should not have been reported as significant. In any case, there was a significant interaction between cognitive reserve and a marker of how advanced the disease was (nadir CD4 count). Specifically, individuals with greater cognitive reserve experienced less apathy than those with less of it, but only among participants who were in a later stage of HIV infection (p < .001). For participants in an earlier stage of infection, cognitive reserve did not significantly predict apathy. The authors hypothesize that this protective effect against apathy is a result of “more efficient neural processing and more effective compensation.”
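
For readers curious what a moderation analysis of this kind looks like in practice, here is a minimal, purely illustrative sketch in Python. The data are simulated and all variable names are hypothetical (this is not the authors' dataset or code); it simply shows the general form of regressing apathy on a cognitive reserve composite, a disease-severity marker, their interaction, and covariates.

```python
# Illustrative sketch with simulated data and hypothetical variable names;
# not the Shapiro et al. (2014) dataset or analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 116  # arbitrary simulated sample size

df = pd.DataFrame({
    "education_years": rng.normal(12, 3, n),
    "wtar_score":      rng.normal(85, 12, n),  # word-reading score
    "nadir_cd4":       rng.normal(0, 1, n),    # standardized severity marker
    "age":             rng.normal(48, 9, n),
    "depression":      rng.normal(10, 6, n),   # e.g., BDI total
})

# Cognitive reserve composite: mean of z-scored education and reading score
z = lambda s: (s - s.mean()) / s.std()
df["reserve"] = (z(df["education_years"]) + z(df["wtar_score"])) / 2

# Simulated apathy outcome containing a reserve-by-severity interaction
df["apathy"] = (20 - 2 * df["reserve"] + 3 * df["reserve"] * df["nadir_cd4"]
                + 0.3 * df["depression"] + rng.normal(0, 4, n))

# Test the reserve x disease-severity interaction, adjusting for covariates
model = smf.ols("apathy ~ reserve * nadir_cd4 + age + depression", data=df).fit()
print(model.summary())
```

The `reserve * nadir_cd4` term expands to both main effects plus their product; a significant product term is what justifies probing the effect of reserve separately at earlier and later stages of infection, as the authors did.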



This paper is interesting in that it may be the first to find a link between cognitive reserve and apathy; however, this link was found only in individuals with more advanced HIV, and it’s not clear why that was the case. The study also has some limitations noted by the researchers, including the lack of a healthy control group and an inability to determine causality. Another possible problem that is not addressed is that the mean word-reading test score fell in the borderline range. Therefore, these results may not generalize to the broader population, in which the mean would be expected to fall in the average range.

Shapiro, M. E., Mahoney, J. R., Peyser, D., Zingman, B. S., & Verghese, J. (2014). Cognitive reserve protects against apathy in individuals with Human Immunodeficiency Virus. Archives of Clinical Neuropsychology, 29, 110-120.

Monday, January 27, 2014

DSM-5 Neurocognitive Disorders and Implications for Forensic Evaluations

As you’re probably well aware, the latest edition of the DSM has been met with much criticism.  Concerns about over-diagnosis have been loudly expressed by individuals such as Allen Frances, M.D.  Despite receiving less attention than some other categories (e.g., personality and neurodevelopmental disorders), the changes to the diagnostic criteria for neurocognitive disorders have not escaped controversy.

An article by Izabela Z. Schultz (2013) in Psychological Injury and Law discussed how the updated criteria for neurocognitive disorders may affect forensic situations.  Even with the focus on forensic applications, many of the concerns she raises apply to any type of neuropsychological evaluation (it should be noted that she also has positive things to say about the changes; I am focusing on her criticisms here).  Below I outline and respond to some of these concerns. 

In DSM-5, Major and Mild Neurocognitive Disorder diagnoses depend on the presence of a decline from previous functioning and on the presence (or absence) of interference with independence in activities of daily living (ADLs).  The former consists of both concern from an individual (e.g., client, clinician, or family member) and objective cognitive impairment, generally determined by neuropsychological testing.  Schultz cites problems with the assessment of ADLs and with the use of neuropsychological tests in this context.  First, she claims that psychologists may not have the skills to assess ADLs, given that this is primarily the domain of occupational therapists.  Although OTs specialize in ADL assessment, I’m confident psychologists are also capable of assessing ADLs through interviews and measures such as the Texas Functional Living Scale.

Schultz’s complaints about the use of neuropsychological testing for the purposes of determining cognitive impairment include:

1.      Arbitrary cut-off values (in standard deviations from the mean) may result in overdiagnosis of mild neurocognitive disorder and underdiagnosis of major neurocognitive disorder.
2.      Neuropsychological measures may have psychometric problems and be prone to biases, errors, and limitations with regard to certain populations who experience barriers to assessment. 
3.      Neuropsychologists administer several tests, and there is no standard rule for determining the overall level of impairment when test scores vary.  In forensic settings, this could lead to consciously or unconsciously biased diagnostic decision making.
4.      DSM-5 does not stress a multi-method approach involving qualitative methods in addition to quantitative ones.

Related to Schultz’s first point is another criticism: the lack of a “moderate neurocognitive disorder” diagnosis.  Schultz is concerned that in forensic settings, individuals who have serious functional problems may be diagnosed with only the mild disorder, which could detrimentally affect case outcomes.  She raises a valid point when she notes that DSM-5 acknowledges mild, moderate, and severe TBI even though there is no option to diagnose a moderate neurocognitive disorder.  On the other hand, the cut-off values for establishing cognitive impairment (1 or 2 standard deviations below the mean for mild and major neurocognitive disorder, respectively) are indeed somewhat arbitrary.  Adding a moderate diagnosis would reduce the range between the cut-off values, likely blurring the lines between the disorders even more.  Despite there being no option to diagnose a moderate neurocognitive disorder, I am not overly concerned about overdiagnosis of the mild form.  These diagnoses are not based solely on neuropsychological test scores – they also take decline from previous functioning and independence in ADLs into account.  Therefore, someone will not be diagnosed with a disorder based solely on scoring a standard deviation below the mean on a neuropsychological measure.

Schultz’s second and fourth points above appear to be less a problem with the diagnostic criteria and more an issue of neuropsychologists’ ethics.  Neuropsychologists are ethically obligated to choose tests that are valid for the purpose of the evaluation and for the individuals they are testing.  Therefore, the responsibility is on the psychologist to consider the psychometric properties of each measure.  Similarly, DSM-5’s lack of emphasis on qualitative assessment methods is not a problem.  Any competent psychologist knows that test scores must be interpreted in light of other factors such as premorbid functioning, clinical interview data, medical records, etc.

The third potential issue with neuropsychological testing could certainly bring about some problems.  It appears to be left up to individual clinicians to determine how to judge overall severity when multiple measures are administered with varying results.  For example, should an individual who scores two standard deviations below the mean on one test, but just one standard deviation below the mean on the rest, be diagnosed with major or mild neurocognitive disorder?  A standard rule does seem to be indicated here.  At the least, neuropsychologists should adopt their own rule and use it consistently to avoid bias in forensic situations.
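
To make that concrete, here is a rough sketch (in Python, with entirely hypothetical test names, scores, and thresholds) of what one explicit, consistently applied rule might look like. It is my own illustration, not a DSM-5 rule or any published standard.

```python
# Purely illustrative decision rule; thresholds and test names are hypothetical.
# Scores are z-scores (0 = population mean, negative = below the mean).

def classify_impairment(z_scores, adl_independence_lost):
    """Example rule: 'major' only if at least two measures fall 2 SD or more
    below the mean AND independence in ADLs is lost; 'mild' if at least two
    measures fall 1 SD or more below the mean; otherwise no objective
    impairment on this battery."""
    severe = sum(1 for z in z_scores.values() if z <= -2.0)
    impaired = sum(1 for z in z_scores.values() if z <= -1.0)

    if severe >= 2 and adl_independence_lost:
        return "consistent with major neurocognitive disorder"
    if impaired >= 2:
        return "consistent with mild neurocognitive disorder"
    return "no objective impairment on this battery"

# The ambiguous case above: one score at -2 SD, the rest around -1 SD
scores = {"trails_b": -2.1, "list_learning": -1.1, "naming": -0.9, "fluency": -1.2}
print(classify_impairment(scores, adl_independence_lost=False))
# -> "consistent with mild neurocognitive disorder" under this particular rule
```

The specific cut-offs are beside the point; what matters for forensic work is that whatever rule a neuropsychologist uses is stated in advance and applied the same way to every case.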

Another point of contention with the DSM-5 changes brought up in the article is the choice of specifiers for individual sources of cognitive impairment.  Schultz points out that rare diseases such as prion disease are included, while other sources such as multiple sclerosis and electrical injury are left out, relegated to the “due to another medical condition” specifier.  In forensic settings, however, use of the “due to another medical condition” specifier shouldn’t be a problem, as the psychologist could discuss which medical condition is probably causing the problem (e.g., an electrical injury).  Admittedly, I’m not a forensic psychologist, so maybe someone well-versed in that field could clear up this question.

Finally, Schultz mentions that ADLs are considered in the diagnostic criteria, but not other domains of impairment, such as vocational or social impairment.  I agree that these should be part of the criteria, especially considering that functional impairment can be independent of other indicators such as neuropsychological test scores.  As an example, an individual with a 1 standard deviation decline in cognitive functioning due to a TBI may experience more functional problems if that individual has a cognitively demanding job as opposed to a job that requires only manual labor. 

Overall, I believe this category is one of the few which improved with the new edition of the DSM.  However, only time will tell how the updated criteria will affect neuropsychological evaluations in forensic and other settings. 

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.


Schultz, I. Z. (2013). DSM-5 neurocognitive disorders: Validity, reliability, fairness, and utility in forensic applications. Psychological Injury and Law, 6, 299-306.

Sunday, January 12, 2014

Can Neuropsychology Detect Psychosis Before it Ever Occurs?

Over the last several years, interest in detecting schizophrenia (and other psychotic disorders) in its prodromal phase (before any psychotic symptoms have manifested) has been rapidly increasing. Detecting one’s vulnerability to acute psychosis before a first event has many theoretical and practical advantages. Evidence suggests that the length and severity of untreated psychosis are directly related to an individual’s long-term outcome. If psychosis can be delayed, reduced, or, ideally, altogether prevented by targeted interventions, the morbidity of the illness should be greatly reduced (Eastvold, Heaton, & Cadenhead, 2007; Hawkins et al., 2004).
  
In the past, attempts at diagnosing prodromal psychosis have relied upon criteria that included transient psychotic symptoms and/or a family history of psychosis combined with a marked decline in functioning; however, this method is marred by a 50% or higher false positive rate in most studies, and most people identified as at risk do not go on to develop psychosis, at least not over the duration of the studies that are monitoring them (Haroun, Dunn, Haroun, & Cadenhead, 2006). This high rate of false positives and low rate of conversion to psychosis raises ethical concerns about whether and how to treat those identified as having a high risk of developing a psychotic illness (Haroun, Dunn, Haroun, & Cadenhead, 2006).
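
As a back-of-the-envelope illustration of why screening produces so many false positives, here is a tiny Bayes'-rule sketch in Python. The sensitivity, specificity, and base rate below are made-up numbers chosen only to make the point, not values taken from the cited studies.

```python
# Illustrative arithmetic only; the numbers below are invented for the example.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(person converts to psychosis | screened positive), via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Hypothetical help-seeking sample: 20% eventually convert; the screen flags
# 80% of eventual converters and correctly clears 70% of non-converters.
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.70, base_rate=0.20)
print(f"PPV = {ppv:.2f}")  # ~0.40, i.e., roughly 60% of positive screens are false alarms
```

Even with a screen that looks reasonably accurate on paper, a modest base rate of conversion means most people flagged as "at risk" will never become psychotic, which is exactly the ethical bind described above.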
  
The above has led to a desire to develop a more accurate way of identifying those most at risk of having a psychotic episode. The neurodevelopmental model provides strong evidence that cognitive abnormalities related to abnormal brain maturation are a core feature of psychotic illnesses (Lencz et al., 2006; Eastvold, Heaton, & Cadenhead, 2007). These neurocognitive deficits are well established in schizophrenia, span multiple domains, and include motor abilities, executive functions, language, general intelligence, learning/memory, and spatial abilities (Eastvold, Heaton, & Cadenhead, 2007).
  
Similar neurocognitive dysfunction, especially impairments in visuospatial processing and working memory, has been detected in first-degree relatives of psychotic patients and in those with schizotypal disorder, both groups known to be at greater risk of developing psychosis (Brewer et al., 2005). Because these neurocognitive features are present prior to the onset of any psychotic symptoms, they may be a trait marker for schizophrenia. This has raised the very interesting possibility of identifying those at high risk of developing psychosis by using neuropsychological assessments such as the Wechsler Memory Scale-Revised (WMS-R), the Wisconsin Card Sorting Test, the Vocabulary and Block Design subtests of the WAIS-IV, and others (Brewer et al., 2005; Eastvold, Heaton, & Cadenhead, 2007; Lencz et al., 2006).
  
These findings represent very exciting developments in the field of psychological assessment. While the current research is very promising, most studies call for continued research with larger cohorts to further define and validate the neurocognitive profile that best indicates future psychotic symptoms, as well as to validate the cognitive deficits as a true trait marker for psychotic vulnerability.
  
Another problem to be addressed is the high degree of variability in the neuropsychological batteries that different researchers are testing for detecting prodromal psychosis. This is part of the process of developing a standardized test battery, but at present there is no general consensus about which tests to use and what results to look for, which keeps the utility of neuropsychological assessment as an identifier of those at risk of psychosis quite limited.
  
I find the prospect of being able to assess for prodromal psychosis tantalizing. As a student of clinical psychology, this could potentially be an important part of my future. I think that being able to reliably detect psychotic disorders before they strike would be a wonderful development for clients, their families, and the field of clinical psychology. It could potentially prevent much personal and economic devastation; it is noninvasive, relatively brief to administer (at least relative to the enormity of possibly preventing disorders this severe from manifesting), and potentially much more reliable than what is presently available.
  
Of course, even with all of the obvious benefits, there are certainly potential problems to address. Aside from further validating the neurocognitive trait marker and assessment batteries, I can see ethical concerns looming as perhaps the largest potential challenge. Determining who should be screened, when they should be screened, and who pays for said screening are among the first concerns. What the threshold should be for a positive screening, and what, if anything, should be done for those deemed at only moderate risk of developing a psychotic state, are also questions in need of answers.
  
Perhaps the biggest ethical dilemma will be reserved for those who are deemed highly likely to develop a psychotic illness. Presently, most interventions for psychosis involve antipsychotic medications, which come with a host of unpleasant and potentially dangerous side effects (although in this regard they are little different from other pharmaceuticals). Would it be ethical to recommend a client be prescribed these drugs in perpetuity if one were not 100% sure they were needed? The line between pragmatic risk prevention and unnecessary risk is a thin one indeed, and our accuracy in detecting prodromal psychosis will need to improve before we are informed enough to make such profound decisions.
  
It also seems likely that psychologists will face issues informing clients that they have been found likely to develop psychosis. This knowledge, like the genetic knowledge that one will almost certainly develop cancer, could be very beneficial in reducing morbidity, but it is not without considerable stress. Knowing one’s vulnerability to psychosis could have both long- and short-term negative effects on one’s psychological health. It is even possible that the sheer anxiety caused by this information could increase the chances of a vulnerable person having a first psychotic event, and/or increase the chances of developing other mental illnesses. This problem is magnified if we do not have any palatable and efficacious interventions to offer at the time the prognosis is delivered.
  
Ultimately, I think that using neuropsychological assessments to detect prodromal psychotic disorders will become a valued part of what clinical neuropsychologists provide. Before that can happen, it is paramount that the neurocognitive deficits being assessed are validated and then quantified in such a way that they can very reliably predict the future onset of psychotic illness. The ethical concerns described above will also need to be addressed before any widespread adoption of a neuropsychological battery used for the detection of potential psychotic illnesses should occur. Finally, while I feel that the development of these test batteries should proceed as rapidly as possible, the benefits of detection will be blunted by our present lack of quality preventative interventions. In some cases, early detection could cause potentially damaging anxiety, precisely because of our present dearth of safe and proven options for preventing psychosis from developing. This downside is likely outweighed by the fact that at the very least, those identified as at high risk could be carefully monitored and provided with proper medications at the very first signs of an initial psychotic break, greatly improving their long-term outcomes.  

References

Brewer, W. J., Francey, S. M., Wood, S. J., Jackson, H. J., Pantelis, C., Phillips, L. J., ... & McGorry, P. D. (2005). Memory impairments identified in people at ultra-high risk for psychosis who later develop first-episode psychosis. American Journal of Psychiatry, 162, 71-78.

Eastvold, A. D., Heaton, R. K., & Cadenhead, K. S. (2007). Neurocognitive deficits in the (putative) prodrome and first episode of psychosis. Schizophrenia Research, 93, 266-277.

Haroun, N., Dunn, L., Haroun, A., & Cadenhead, K. S. (2006). Risk and protection in prodromal schizophrenia: Ethical implications for clinical practice and future research. Schizophrenia Bulletin, 32, 166-178.

Hawkins, K. A., Addington, J., Keefe, R. S. E., Christensen, B., Perkins, D. O., Zipursky, R., ... & McGlashan, T. H. (2004). Neuropsychological status of subjects at high risk for a first episode of psychosis. Schizophrenia Research, 67, 115-122.

Lencz, T., Smith, C. W., McLaughlin, D., Auther, A., Nakayama, E., Hovey, L., & Cornblatt, B. A. (2006). Generalized and specific neurocognitive deficits in prodromal schizophrenia. Biological Psychiatry, 59, 863-871.

Sunday, January 5, 2014

Toward a Modern Neuropsychology

Are neuropsychology’s current syndromes and assessment measures outdated?  Ardila (2013) argues that this may be the case in his commentary in the journal Archives of Clinical Neuropsychology.  Ardila contends that some classic neuropsychological syndromes (e.g., aphasia, alexia, prosopagnosia) may need to be updated, considering the changes in technological and social conditions that have taken place over the last 100 years.  Because of these changes, tasks are often performed in different ways than they used to be, thus potentially requiring the use of different brain areas.  The author specifically refers to cognitive abilities including spoken language, written language, numerical abilities, spatial orientation, people recognition, memory, and executive functions.

Written language is one of the more interesting areas addressed in this article.  Much writing is now done using a keyboard and word processor rather than pen and paper.  Although an understanding of language is necessary for both, typing and handwriting place different demands on the individual.  For example, handwriting requires individuals to construct letters using fine motor skills, while also keeping the letters spaced properly.  Typing, on the other hand, merely requires the press of buttons rather than the construction of letters; however, typists must use both hands to type letters in the correct order.  To do this quickly and accurately, a well-functioning corpus callosum is needed to facilitate communication between the two cerebral hemispheres.  The argument is that because modern written language involves much typing, we need to learn more about the brain structures involved (as opposed to those involved in handwriting) and develop new ways to assess for impairment in typing ability that is due to brain dysfunction.  This same concept applies to the other functional domains addressed in the article.  For example, societal changes over time may affect how the brain is organized for the recognition of people, as exposure to television, the internet, and other media now compels us to remember far more faces than we did when neuropsychology was a new field.

The author concludes that our century-old neuropsychological syndromes (and the measures used to test for them) need to be re-assessed.  In any field, it is important to take time to examine practices to see if they are outdated or could be improved upon, and neuropsychology is no different.  On the other hand, is it necessary to declare the existence of new syndromes based on modern tasks that did not exist previously?  Ardila suggests one such new syndrome could be “acomputeria” – an inability to use computers.  He goes on to list potential specific subtypes of acomputeria.  I wonder if a better solution would be to classify problems based on their underlying causes, rather than inventing a new syndrome for each new modern task.  For example, one’s inability to use a computer may not be a syndrome in and of itself, but rather the result of memory problems (the individual is unable to learn new information), executive dysfunction (the individual is unable to problem-solve and continues to perform unsuccessful actions), or something else.  In this case, traditional neuropsychological measures may be effective in determining the underlying problem.  Either way, the issues addressed in the commentary are certainly worth thinking about further and, most importantly, testing empirically.

Here is a link to the abstract: Ardila 2013

Ardila, A. (2013). A new neuropsychology for the XXI century. Archives of Clinical Neuropsychology, 28, 751-762.