Sunday, March 9, 2014

A New Trail Making Test

The Trail Making Test (TMT) is an oldie but a goodie. The paper-and-pencil neuropsychological measure consists of two parts: TMT-A is composed of numbers enclosed in circles, and the examinee is asked to simply connect the numbered circles in ascending order as quickly as possible. Part B has both letters and numbers, and examinees must connect "1" to "A," "A" to "2," "2" to "B," and so on. TMT-B has been considered one of the most sensitive indicators of overall brain impairment due to the multiple abilities it taps into (e.g., psychomotor speed, attention, working memory, visual scanning, mental flexibility).

On the other hand, the inclusion of the English alphabet has limited the use of the test to Western populations. Citing this limitation, along with evidence that individuals with lower education or mild cognitive impairment (MCI) do not perform well on the TMT, researchers devised an alternative version of the test (Kim, Baek, & Kim, 2014). The TMT-B&W (black and white) replaces the letters of the original with a second, identical set of numbers enclosed in black circles instead of white ones. Instead of alternating between numbers and letters, examinees connect a white-circled number with its black counterpart before moving on to the next number (see the figure below). Part A is similar to the original TMT-A, but with all of the even numbers enclosed in black circles.
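
To make the two connection rules concrete, here is a minimal sketch (mine, not from the paper) that generates the expected connection sequence for each version:

```python
# A minimal sketch (mine, not from Kim et al.) contrasting the expected
# connection sequences of the original TMT-B and the new TMT-B&W.

import string

def tmt_b_sequence(n_pairs=5):
    """Original TMT-B: alternate numbers and letters (1, A, 2, B, ...)."""
    seq = []
    for i in range(n_pairs):
        seq.append(str(i + 1))
        seq.append(string.ascii_uppercase[i])
    return seq

def tmt_bw_sequence(n_pairs=5):
    """TMT-B&W: connect each white-circled number to its black-circled
    twin before moving on to the next number."""
    seq = []
    for i in range(n_pairs):
        seq.append(f"white {i + 1}")
        seq.append(f"black {i + 1}")
    return seq

print(tmt_b_sequence())   # ['1', 'A', '2', 'B', '3', 'C', '4', 'D', '5', 'E']
print(tmt_bw_sequence())  # ['white 1', 'black 1', 'white 2', 'black 2', ...]
```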


(Figure: sample TMT-B&W items, from Kim et al., 2014)
The authors administered the TMT-B&W and the original TMT (along with some other neuropsychological measures) to three groups of participants in South Korea: a control group, patients diagnosed with mild cognitive impairment, and individuals with Alzheimer's Disease. Overall, a higher proportion of participants completed the TMT-B&W than the original TMT (participants with lower education especially struggled to complete the original). Another interesting finding was that TMT-B performance distinguished only the AD group from the control group, while the TMT-B&W distinguished among all three groups (i.e., control, MCI, and AD).

The latter result suggests the new version of the TMT may be more sensitive than the original. However, it is unclear whether this finding is due only to the Korean participants' struggles with the original TMT (the control and MCI groups may have performed equally poorly because of unfamiliarity with the English alphabet). Intuitively, one would expect this test to be less sensitive than the original TMT, because it appears easier. Specifically, the working memory load of the TMT-B&W seems lighter, given that examinees only have to remember the number they just connected, rather than keeping both numbers and letters in mind. I'd be interested in a factor analysis to see whether the two measures tap into exactly the same constructs.

The authors' goal was to provide a version of the TMT that could be used with non-Western and illiterate populations. The TMT-B&W certainly shows promise in this regard, as participants tolerated the test well, and correlations with the original TMT and other measures provided evidence of good construct validity. However, participants' level of English fluency was not measured or described, so it's impossible to tell how much of a role it played in the study. Even in Western populations, the test may be useful with illiterate or poorly educated individuals. A comparison of the two versions of the TMT in an English-speaking population would provide further information on whether this is the case.

Kim, H. J., Baek, M. J., & Kim, S. (2014). Alternative type of the Trail Making Test in nonnative English speakers: The Trail Making Test: Black & White. PLoS ONE, 9(2), 1-6.

Friday, February 7, 2014

Learn More, Care More?

Cognitive reserve has been linked to apathy for the first time, according to a new article in Archives of Clinical Neuropsychology (Shapiro, Mahoney, Peyser, Zingman, & Verghese, 2014). Cognitive reserve refers to the brain's ability to demonstrate resilience to brain damage resulting from problems such as Alzheimer's Disease. For example, individuals with a high amount of cognitive reserve may experience few memory problems despite Alzheimer's pathology existing in the brain. Factors that may help build cognitive reserve include education, higher intelligence, and regular engagement in both mental and physical activities throughout the lifespan (for those interested in cognitive reserve in general, check out the fascinating video below on the Nun Study).


The protective effects of cognitive reserve have generally been examined in relation to neurocognitive domains such as memory. Shapiro and colleagues wanted to see if cognitive reserve would also be protective against apathy. More specifically, they investigated individuals diagnosed with human immunodeficiency virus (HIV), an infection often associated with neurocognitive and neuropsychiatric symptoms, including apathy. Cognitive reserve was measured through a composite score consisting of participants' highest level of educational attainment and scores on the Wechsler Test of Adult Reading (WTAR), while apathy was measured using a brief self-report measure. The researchers accounted for possible confounding variables such as age, gender, disease duration, markers of disease severity, and scores on the Beck Depression Inventory.
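
The article's exact scoring procedure isn't reproduced here, but composites of this kind are commonly built by standardizing each component against the sample and averaging. A minimal sketch, where the equal weighting and variable names are my assumptions rather than the authors' method:

```python
# Hedged sketch of one common way to build a cognitive reserve composite:
# z-score each component against the sample, then average them. The equal
# weighting is an illustrative assumption, not the authors' exact procedure.

import numpy as np

def reserve_composite(education_years, wtar_scores):
    edu = np.asarray(education_years, dtype=float)
    wtar = np.asarray(wtar_scores, dtype=float)
    z_edu = (edu - edu.mean()) / edu.std(ddof=1)      # standardize education
    z_wtar = (wtar - wtar.mean()) / wtar.std(ddof=1)  # standardize WTAR
    return (z_edu + z_wtar) / 2.0                     # equal-weight average

print(reserve_composite([12, 16, 18, 10], [95, 110, 120, 85]))
```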

Thirty-one percent of participants demonstrated clinically significant apathy based on the self-report measure. The authors stated that cognitive reserve significantly predicted apathy overall (p = .02), but the method section indicated that an alpha level of .01 was used for all analyses; by that criterion, this main effect should not have been reported as significant. In any case, there was a significant interaction between cognitive reserve and a marker of the stage of advancement of the disease (nadir CD4 count). Specifically, individuals with greater cognitive reserve experienced less apathy than those with lower reserve, but only among participants in a later stage of HIV infection (p < .001). For participants in an earlier stage of infection, cognitive reserve did not significantly predict apathy. The authors hypothesize that this protective effect against apathy is a result of "more efficient neural processing and more effective compensation."
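
For readers curious about what such a moderation analysis looks like in practice, here is an illustrative sketch on simulated data (the variable names, effect sizes, and the use of statsmodels are all my assumptions, not the study's procedure):

```python
# Illustrative sketch on simulated data (NOT the study's data or code):
# testing whether disease stage moderates the reserve-apathy relationship
# by including a reserve x stage interaction term in a regression.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
reserve = rng.normal(0.0, 1.0, n)   # composite cognitive reserve (z-scored)
advanced = rng.integers(0, 2, n)    # 1 = low nadir CD4 (more advanced disease)

# Simulate the reported pattern: reserve reduces apathy only when advanced == 1.
apathy = 10.0 - 2.0 * reserve * advanced + rng.normal(0.0, 2.0, n)

df = pd.DataFrame({"apathy": apathy, "reserve": reserve, "advanced": advanced})
fit = smf.ols("apathy ~ reserve * advanced", data=df).fit()
print(fit.params)   # the reserve:advanced coefficient captures the moderation
print(fit.pvalues)
```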

This paper is interesting in that it may be the first to find a link between cognitive reserve and apathy; however, this link was found only in individuals with more advanced HIV, and it's not clear why. The study also has some limitations noted by the researchers, including the lack of a healthy control group and an inability to determine causality. Another possible problem, not addressed by the authors, is that the sample's mean word-reading score fell in the borderline range; these results may therefore not generalize to the broader population, in which the mean would be expected to fall in the average range.

Shapiro, M. E., Mahoney, J. R., Peyser, D., Zingman, B. S., & Verghese, J. (2014). Cognitive reserve protects against apathy in individuals with Human Immunodeficiency Virus. Archives of Clinical Neuropsychology, 29, 110-120.

Monday, January 27, 2014

DSM-5 Neurocognitive Disorders and Implications for Forensic Evaluations

As you’re probably well aware, the latest edition of the DSM has been met with much criticism.  Concerns about over-diagnosis have been loudly expressed by individuals such as Allen Frances, M.D.  Despite receiving less attention than some other categories (e.g., personality and neurodevelopmental disorders), the changes to the diagnostic criteria for neurocognitive disorders have not escaped controversy.

An article by Izabela Z. Schultz (2013) in Psychological Injury and Law discussed how the updated criteria for neurocognitive disorders may affect forensic situations.  Even with the focus on forensic applications, many of the concerns she raises apply to any type of neuropsychological evaluation (it should be noted that she also has positive things to say about the changes; I am focusing on her criticisms here).  Below I outline and respond to some of these concerns. 

In DSM-5, Major and Mild Neurocognitive Disorder diagnoses depend on the presence (or absence) of both a decline from previous functioning and interference with independence in activities of daily living (ADLs).  The former requires both concern from an individual (i.e., the client, the clinician, or a family member) and objective cognitive impairment, generally determined by neuropsychological testing.  Schultz cites problems with the assessment of ADLs and with the use of neuropsychological tests in this context.  First, she claims that psychologists may not have the skills to assess ADLs, given that this is primarily the domain of occupational therapists.  Although OTs specialize in ADL assessment, I’m confident psychologists are also capable of assessing ADLs through interviews and measures such as the Texas Functional Living Scale.

Schultz’s complaints about the use of neuropsychological testing for the purposes of determining cognitive impairment include:

1. Arbitrary cut-off values (in standard deviations from the mean) may result in overdiagnosis of mild neurocognitive disorder and underdiagnosis of major neurocognitive disorder.
2. Neuropsychological measures may have psychometric problems and be prone to biases, errors, and limitations with regard to certain populations who experience barriers to assessment.
3. Neuropsychologists administer several tests, and there is no standard rule for determining the overall level of impairment when test scores vary.  In forensic settings, this could lead to consciously or unconsciously biased diagnostic decision making.
4. DSM-5 does not stress a multi-method approach involving qualitative methods in addition to quantitative ones.

Related to Schultz’s first point is another criticism: the lack of a “moderate neurocognitive disorder” diagnosis.  Schultz is concerned that in forensic settings, individuals who have serious functional problems may only be diagnosed with the mild disorder, which could detrimentally affect case outcomes.  She raises a valid point when she notes that DSM-5 acknowledges mild, moderate, and severe TBI while offering no option to diagnose a moderate neurocognitive disorder.  On the other hand, the cut-off values for establishing cognitive impairment (1 or 2 standard deviations below the mean for mild and major neurocognitive disorder, respectively) are indeed somewhat arbitrary.  Adding a moderate diagnosis would reduce the range between the cut-off values, likely blurring the lines between the disorders even more.  Despite there being no option to diagnose a moderate neurocognitive disorder, I am not overly concerned about overdiagnosis of the mild form.  These diagnoses are not based solely on neuropsychological test scores – they also take decline from previous functioning and independence in ADLs into account.  Therefore, someone will not be diagnosed with a disorder based solely on scoring a standard deviation below the mean on a neuropsychological measure.

Schultz’s second and fourth points above appear to be less a problem with the diagnostic criteria and more an issue of neuropsychologists’ ethics.  Neuropsychologists are ethically obligated to choose tests that are valid both for the purpose of the evaluation and for the individuals being tested.  Therefore, the responsibility is on the psychologist to consider the psychometric properties of each measure.  Similarly, DSM-5’s lack of emphasis on qualitative assessment methods is not a problem.  Any competent psychologist knows that test scores must be interpreted in light of other factors such as premorbid functioning, clinical interview data, medical records, etc.

The third potential issue with neuropsychological testing could certainly bring about some problems.  It appears to be left up to individual clinicians to determine how to judge overall severity when multiple measures are administered with varying results.  For example, should an individual with one score two standard deviations below the mean, and the rest just one standard deviation below, be diagnosed with major or mild neurocognitive disorder?  A standard rule does seem to be indicated here.  At the least, neuropsychologists should adopt their own rule and apply it consistently to avoid bias in forensic situations.
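
To make the point concrete, here is one hypothetical rule expressed in code (the thresholds and the "two tests" requirement are mine, not anything from DSM-5 or a professional guideline):

```python
# Hypothetical decision rule -- NOT a DSM-5 algorithm. Here, the major range
# requires at least two scores at or below -2 SD; the mild range requires at
# least two scores at or below -1 SD. All thresholds are illustrative.

def classify_severity(z_scores, major_cut=-2.0, mild_cut=-1.0, min_tests=2):
    n_major = sum(z <= major_cut for z in z_scores)
    n_mild = sum(z <= mild_cut for z in z_scores)
    if n_major >= min_tests:
        return "major range"
    if n_mild >= min_tests:
        return "mild range"
    return "within normal limits"

# The example from the text: one score at -2 SD, the rest at -1 SD.
print(classify_severity([-2.0, -1.0, -1.0, -1.0]))  # "mild range" under this rule
```

Whatever rule a clinician adopts, the point is that applying it consistently across cases removes one avenue for biased decision making.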

Another point of contention with the DSM-5 changes brought up in the article is the choice of specifiers for individual sources of cognitive impairment.  Schultz points out that rare diseases such as prion disease are included, while other sources such as multiple sclerosis and electrical injury are left out, relegated to the “due to another medical condition” specifier.  However, in forensic settings the “due to another medical condition” specifier shouldn’t be a problem, as the psychologist could discuss which medical condition is probably causing the impairment (e.g., an electrical injury).  Admittedly, I’m not a forensic psychologist, so maybe someone well-versed in that field could clear up this question.

Finally, Schultz mentions that ADLs are considered in the diagnostic criteria, but not other domains of impairment, such as vocational or social impairment.  I agree that these should be part of the criteria, especially considering that functional impairment can be independent of other indicators such as neuropsychological test scores.  As an example, an individual with a 1 standard deviation decline in cognitive functioning due to a TBI may experience more functional problems if that individual has a cognitively demanding job as opposed to a job that requires only manual labor. 

Overall, I believe this category is one of the few which improved with the new edition of the DSM.  However, only time will tell how the updated criteria will affect neuropsychological evaluations in forensic and other settings. 

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.


Schultz, I. Z. (2013). DSM-5 neurocognitive disorders: Validity, reliability, fairness, and utility in forensic applications. Psychological Injury and Law, 6, 299-306.

Sunday, January 12, 2014

Can Neuropsychology Detect Psychosis Before it Ever Occurs?

Over the last several years, interest in detecting schizophrenia (and other psychotic disorders) in its prodromal phase (before any psychotic symptoms have manifested) has been rapidly increasing. Detecting one’s vulnerability to acute psychosis before a first episode has many theoretical and practical advantages. Evidence suggests that the length and severity of untreated psychosis is directly related to an individual’s long-term outcome. If psychosis can be delayed, reduced, or, ideally, altogether prevented by targeted interventions, the morbidity of the illness should be greatly reduced (Eastvold, Heaton, & Cadenhead, 2007; Hawkins et al., 2004).
  
In the past, attempts at diagnosing prodromal psychosis have relied upon criteria that included transient psychotic symptoms and/or a family history of psychosis combined with a marked decline in functioning; however, this method is marred by a 50% or higher false positive rate in most studies, and most people identified as at risk do not go on to develop psychosis, at least not over the duration of the studies monitoring them (Haroun, Dunn, Haroun, & Cadenhead, 2006). This high rate of false positives and low rate of conversion to psychosis raises ethical concerns about whether and how to treat those identified as having a high risk of developing a psychotic illness (Haroun, Dunn, Haroun, & Cadenhead, 2006).
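
The false-positive problem is largely a base-rate problem. A quick worked computation with illustrative numbers (not estimates from the cited studies) shows why even a reasonably accurate screen produces mostly false alarms when conversion to psychosis is uncommon:

```python
# Why false positives dominate: positive predictive value at a low base rate.
# The sensitivity, specificity, and base rate below are illustrative
# assumptions, not figures from the cited studies.

def positive_predictive_value(sensitivity, specificity, base_rate):
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# If only ~20% of "at risk" individuals actually convert to psychosis:
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.70, base_rate=0.20)
print(f"PPV: {ppv:.2f}")  # 0.40 -- about 60% of positive screens are false alarms
```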
  
The above has led to a desire to develop a more accurate way of identifying those most at risk of having a psychotic episode. The neurodevelopmental model provides strong evidence that cognitive abnormalities related to abnormal brain maturation are a core feature of psychotic illnesses (Lencz et al., 2006; Eastvold, Heaton, & Cadenhead, 2007). These neurocognitive deficits are well established in schizophrenia and span multiple domains, including motor abilities, executive functions, language, general intelligence, learning/memory, and spatial abilities (Eastvold, Heaton, & Cadenhead, 2007).
  
Similar neurocognitive dysfunction, especially visuospatial processing and working memory deficits, has been detected in first-degree relatives of psychotic patients and in those with schizotypal disorder, who are known to be at greater risk of developing psychosis (Brewer et al., 2005). Because these neurocognitive features are present prior to the onset of any psychotic symptoms, they may be a trait marker for schizophrenia. This has raised the very interesting possibility of identifying those at high risk of developing psychosis using neuropsychological assessments such as the Wechsler Memory Scale-Revised (WMS-R), the Wisconsin Card Sorting Test, the Vocabulary and Block Design subtests of the WAIS-IV, and others (Brewer et al., 2005; Eastvold, Heaton, & Cadenhead, 2007; Lencz et al., 2006).
  
These findings represent very exciting developments in the field of psychological assessment. While the current research is very promising, most studies call for continued research with larger cohorts to further define and validate the neurocognitive profile that best indicates future psychotic symptoms, as well as to validate the cognitive deficits as a true trait marker for psychotic vulnerability.
  
Another problem to be addressed is the high degree of variability in the composition of the neuropsychological batteries that various researchers are testing for detecting prodromal psychosis. This is part of the process of developing a standardized test battery, but at present there is no general consensus about which tests to use and what results to look for, leaving the utility of neuropsychological assessment as an identifier of those at risk of psychosis quite limited.
  
I find the prospect of being able to assess for prodromal psychosis tantalizing. As a student of clinical psychology, this could potentially be an important part of my future. I think that being able to reliably detect psychotic disorders before they strike would be a wonderful development for clients, their families, and the field of clinical psychology. It could potentially prevent much personal and economic devastation; it is noninvasive, relatively brief to administer (at least relative to the enormity of possibly preventing disorders as severe as these from manifesting), and is potentially much more reliable than what is presently available.
  
Of course, even with all of the obvious benefits, there are certainly potential problems to address. Aside from further validating the neurocognitive trait marker and assessment batteries, I can see ethical concerns looming as perhaps the largest potential challenge. Determining who should be screened, when they should be screened, and who pays for said screening are among the first concerns. What the threshold for a positive screen should be, and what, if anything, should be done for those deemed at only moderate risk of developing a psychotic state, are further questions in need of answers.
  
Perhaps the biggest ethical dilemma will be reserved for those deemed highly likely to develop a psychotic illness. Presently, most interventions for psychosis involve antipsychotic medications, which come with a host of unpleasant and potentially dangerous side effects (although in this regard they are little different from other pharmaceuticals). Would it be ethical to recommend a client be prescribed these drugs in perpetuity if one were not 100% sure they were needed? The line between pragmatic risk prevention and unnecessary risk is a thin one indeed, and our accuracy in detecting prodromal psychosis will need to improve before we are informed enough to make such profound decisions.
  
It also seems likely that psychologists will face issues in informing clients that they have been found likely to develop psychosis. This knowledge, like the genetic knowledge that one will almost certainly develop cancer, could be very beneficial in reducing morbidity, but it is not without considerable stress. Knowing one’s vulnerability to psychosis could have both long- and short-term negative effects on one’s psychological health. It is even possible that the sheer anxiety caused by this information could increase the chances of a vulnerable person having a first psychotic episode and/or developing other mental illnesses. This problem is magnified if we do not have any palatable and efficacious interventions to offer at the time the prognosis is delivered.
  
Ultimately, I think that using neuropsychological assessments to detect prodromal psychotic disorders will become a valued part of what clinical neuropsychologists provide. Before that can happen, it is paramount that the neurocognitive deficits being assessed are validated and then quantified in such a way that they can very reliably predict the future onset of psychotic illness. The ethical concerns described above will also need to be addressed before any widespread adoption of a neuropsychological battery used for the detection of potential psychotic illnesses should occur. Finally, while I feel that the development of these test batteries should proceed as rapidly as possible, the benefits of detection will be blunted by our present lack of quality preventative interventions. In some cases, early detection could cause potentially damaging anxiety, precisely because of our present dearth of safe and proven options for preventing psychosis from developing. This downside is likely outweighed by the fact that at the very least, those identified as at high risk could be carefully monitored and provided with proper medications at the very first signs of an initial psychotic break, greatly improving their long-term outcomes.  

References

Brewer, W. J., Francey, S. M., Wood, S. J., Jackson, H. J., Pantelis, C., Phillips, L. J., ... & McGorry, P. D. (2005). Memory impairments identified in people at ultra-high risk for psychosis who later develop first-episode psychosis. American Journal of Psychiatry, 162, 71-78.

Eastvold, A. D., Heaton, R. K., & Cadenhead, K. S. (2007). Neurocognitive deficits in the (putative) prodrome and first episode of psychosis. Schizophrenia Research, 93, 266-277.

Haroun, N., Dunn, L., Haroun, A., & Cadenhead, K. S. (2006). Risk and protection in prodromal schizophrenia: Ethical implications for clinical practice and future research. Schizophrenia Bulletin, 32, 166-178.

Hawkins, K. A., Addington, J., Keefe, R. S. E., Christensen, B., Perkins, D. O., Zipursky, R., ... & McGlashan, T. H. (2004). Neuropsychological status of subjects at high risk for a first episode of psychosis. Schizophrenia Research, 67, 115-122.

Lencz, T., Smith, C. W., McLaughlin, D., Auther, A., Nakayama, E., Hovey, L., & Cornblatt, B. A. (2006). Generalized and specific neurocognitive deficits in prodromal schizophrenia. Biological Psychiatry, 59, 863-871.

Sunday, January 5, 2014

Toward a Modern Neuropsychology

Are neuropsychology’s current syndromes and assessment measures outdated?  Ardila (2013) argues that this may be the case in his commentary in the journal Archives of Clinical Neuropsychology.  Ardila contends that some classic neuropsychological syndromes (e.g., aphasia, alexia, prosopagnosia) may need to be updated, considering the changes in technological and social conditions that have taken place over the last 100 years.  Because of these changes, tasks are often performed in different ways than they used to be, thus potentially requiring the use of different brain areas.  The author specifically refers to cognitive abilities including spoken language, written language, numerical abilities, spatial orientation, people recognition, memory, and executive functions.

Written language is one of the more interesting areas addressed in this article.  Much writing is now done using a keyboard and word processor rather than pen and paper.  Although an understanding of language is necessary for both, typing and handwriting place differing demands on the individual.  For example, handwriting requires individuals to construct letters using fine motor skills while keeping the letters properly spaced.  Typing, on the other hand, merely requires pressing buttons rather than constructing letters; however, typists must use both hands to type letters in the correct order.  To do this quickly and accurately, a well-functioning corpus callosum is needed to facilitate communication between the two cerebral hemispheres.  The argument is that because modern written language involves much typing, we need to learn more about the brain structures involved (as opposed to those involved in handwriting) and develop new ways to assess for impairment in typing ability that is due to brain dysfunction.  The same concept applies to the other functional domains addressed in the article.  For example, societal changes over time may affect how the brain is organized for the recognition of people, as exposure to television, the internet, and other media now compels us to remember far more faces than when neuropsychology was a new field.

The author concludes that our century-old neuropsychological syndromes (and the measures used to test for them) need to be re-assessed.  In any field, it is important to take time to examine whether practices are outdated or could be improved upon, and neuropsychology is no different.  On the other hand, is it necessary to declare the existence of new syndromes based on modern tasks that did not exist previously?  Ardila suggests one such new syndrome could be “acomputeria” – an inability to use computers.  He goes on to list potential specific subtypes of acomputeria.  I wonder if a better solution would be to classify problems based on their underlying causes, rather than inventing a new syndrome for each new modern task.  For example, one’s inability to use a computer may not be a syndrome in and of itself, but rather the result of memory problems (the individual is unable to learn new information), executive dysfunction (the individual cannot problem-solve and continues to perform unsuccessful actions), or something else.  In this case, traditional neuropsychological measures may be effective in determining the underlying problem.  Either way, the issues addressed in the commentary are certainly worth thinking about further and, most importantly, testing empirically.

Here is a link to the abstract: Ardila 2013

Ardila, A. (2013). A new neuropsychology for the XXI century. Archives of Clinical Neuropsychology, 28, 751-762.

Monday, November 25, 2013

Computerized Neurocognitive Testing in Sport Concussion Management: What Are the Problems?

In a recent post, I gave a brief overview of the use of computerized neurocognitive tests in the management of sports-related concussions.  I mentioned that while these tests are standard practice, they are far from perfect.  Now, I’ll expand on that statement.

As I discussed before, these tests are used to identify subtle problems that can’t be detected using standard neuroimaging techniques or medical exams.  The thinking is that if we can identify these impairments, we can decrease the risk of subsequent (and potentially more serious) injuries.  However, factors relating to the tests we use, the athletes to whom we administer them, and the characteristics of the injuries themselves can severely impair these tests’ ability to correctly identify individuals who are still suffering from the effects of concussions.

In terms of tests such as ImPACT, studies have generally found low-to-moderate test-retest reliability, suggesting that athletes tend not to perform consistently when they take the test multiple times (see Broglio, Ferrara, Macciocchi, Baumgartner, & Elliott, 2007; Elbin, Schatz, & Covassin, 2011).  This is a major problem, considering that the tests are designed to be administered to the same athlete multiple times.  One contributor to this issue is the presence of practice effects, whereby athletes improve their performance simply because they have taken the test before.  Another factor hampering reliability may be that the construct itself (i.e., impairment due to sports concussion) can’t be reliably measured due to its often subtle and transient nature.
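
One standard way to account for imperfect test-retest reliability when comparing baseline and post-injury scores is a reliable change index (RCI). A minimal sketch of the classic Jacobson-Truax formulation, with illustrative numbers (I'm not claiming this is how any particular test computes its scores):

```python
# Minimal reliable change index (RCI) sketch, Jacobson & Truax style.
# Lower test-retest reliability inflates the standard error of the
# difference, so a larger score drop is needed before a decline counts
# as "real". All numeric values below are illustrative assumptions.

import math

def rci(baseline, post, sd_baseline, test_retest_r):
    sem = sd_baseline * math.sqrt(1 - test_retest_r)  # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)                  # standard error of the difference
    return (post - baseline) / s_diff                 # values below ~ -1.96 suggest decline

# A 10-point drop on a scale with SD = 15:
print(rci(100, 90, 15, test_retest_r=0.85))  # ~ -1.22: not a reliable decline
print(rci(100, 90, 15, test_retest_r=0.60))  # ~ -0.75: even less so
```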

The way in which the tests are used can also hurt their validity.  An advantage of these tests is their ability to be administered to groups of athletes, saving organizations time and money.  In fact, most baseline testing is done in groups.  On the other hand, post-injury testing is done on an individual basis.  Given evidence that group testing may result in poorer performance than individual testing (Moser, Schatz, Neidzwski, & Ott, 2011), comparisons of performance on baseline testing (group setting) to post-injury testing (individual setting) may not be accurate.

Athletes are generally expected to play through injury.  As a result, they may distort results by “sandbagging”: deliberately performing poorly on the baseline test, which produces a low score that is easier to match on post-injury testing (Peyton Manning has admitted to doing this).  The computerized tests do have ways of identifying this; however, little research has been done to determine whether they work.
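
The embedded validity indicators in tests like ImPACT are proprietary, but the general idea can be sketched as a simple flag for implausibly poor baseline performance (the threshold here is entirely hypothetical):

```python
# Hypothetical sandbagging flag -- NOT how any commercial test actually
# works; it only illustrates the general idea of an embedded validity
# check. A baseline score implausibly far below normative expectations
# gets flagged for review before it is used as a comparison standard.

def flag_suspect_baseline(score, norm_mean, norm_sd, cutoff_z=-2.0):
    z = (score - norm_mean) / norm_sd
    return z <= cutoff_z  # True = implausibly poor baseline

print(flag_suspect_baseline(score=55, norm_mean=100, norm_sd=15))  # True: flag it
print(flag_suspect_baseline(score=92, norm_mean=100, norm_sd=15))  # False
```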

Finally, it remains to be seen if the baseline testing model is effective in reducing the risk of either subsequent concussions or more serious long-term effects.  One study found that a symptom-free waiting period did not reduce the risk of sustaining another concussion (McCrea et al., 2009).

Although computerized neurocognitive tests are far from perfect, it is important to remember that they are just one tool in making return-to-play decisions after athletes suffer concussions.  Physical exams, self-reported symptoms, and balance testing are also used.  Furthermore, as objective measures go, these tests are the best we currently have.  Consequently, their use will continue to be widespread until something better comes along.

References

Broglio, S. P., Ferrara, M. S., Macciocchi, S. N., Baumgartner, T. A., & Elliott, R. (2007). Test-retest reliability of computerized concussion assessment programs. Journal of Athletic Training, 42, 509-514.

Elbin, R. J., Schatz, P., & Covassin, T. (2011). One-year test-retest reliability of the online version of ImPACT in high school athletes. The American Journal of Sports Medicine, 39, 2319-2324.

McCrea, M., Guskiewicz, K., Randolph, C., Barr, W. E., Hammeke, T. A., Marshall, S. W., & Kelly, J. P. (2009). Effects of a symptom-free waiting period on clinical outcome and risk of reinjury after sport-related concussion. Neurosurgery, 65, 876-883.


Moser, R. S., Schatz, P., Neidzwski, K., & Ott, S. D. (2011). Group versus individual administration affects baseline neurocognitive test performance. American Journal of Sports Medicine, 39, 2325-2330.

Monday, November 11, 2013

Free Neuropsychology Lectures

The website for the 4th UK Paediatric Neuropsychology Symposium has posted a series of free lectures from last year’s symposium.  These lectures include “Effects of Institutionalization on Brain Development and Behaviour” from Charles Nelson, a talk on acquired brain injury in childhood by Vicki Anderson, “Development of Executive Functions During Early Childhood and their Modulation by Genes and the Environment” from Adele Diamond, and a discussion of early symptomatic syndromes eliciting neurodevelopmental clinical examinations from Christopher Gillberg.  This year’s conference will take place May 19-23, 2014.  The theme is “Atypical Developmental Pathways.”

The lectures can be found here.