Mr. Getty presents to a cardiologist. He’s been experiencing shortness of breath and a reduced ability to exercise, which is unusual given that he’s an athlete. At the cardiologist, Mr. Getty goes through the journey below:
At first, he was worried about his health, as he had read on WebMD that his symptoms were similar to those of heart failure patients. But the cardiologist didn’t rely on his symptoms alone; he also ran a battery of biomarker-based tests to reach the right diagnosis. Luckily, Mr. Getty is in the clear: he’s only experiencing anxiety about his upcoming exams.
Now imagine that Mr. Getty didn’t have to run through any tests, and the doctor prescribed him Lanoxin, a drug that increases the strength of heart muscle contractions, based purely on his symptoms. You can probably predict the consequences of the cardiologist’s lack of due diligence; unfortunately, this is exactly what happens in psychiatry today.
As Bryan Johnson, the CEO of Kernel, explains here, we are truly in the “medieval times” of mental health.
Mr. Getty is back, but this time he comes to a psychiatrist after his general practitioner identified signs of Attention Deficit Hyperactivity Disorder (ADHD). At the psychiatrist, Mr. Getty goes through the journey below:
Using the Diagnostic and (not so statistical) Statistical Manual of Mental Disorders, known as the DSM-5, Mr. Getty’s psychiatrist checks whether he exhibits a minimum quota of symptoms that align with ADHD.
Next, the psychiatrist may continue forward with more comprehensive interviews that include anecdotal analysis of the patient's core symptoms, family history, and use of rating scales with Mr. Getty’s friends and family to pinpoint a specific disorder. The results of these examinations are consulted alongside the DSM-5.
Throughout this process, we find a dressing of subjectivity without any dashes of real statistics and precision; what Mr. Getty is deprived of in this pipeline is biomarkers. In the cardiology example, we saw that the standard of care entails biomarker investigations, from ECG to urinalysis. These are investigations that the current mental health care system lacks.
This absence of clinical biomarkers within the diagnosis process and the bible of psychiatry, the DSM-5, is leading to more and more misdiagnoses. Here’s a quick TL;DR of the effects we are seeing in mental health because of these inaccuracies:
- Diagnosis has clearly become untethered from medical reality when one out of five boys nationally is diagnosed with ADHD and when antidepressant use has increased 400% in a decade, with nearly one-quarter of all women in their 40s and 50s taking antidepressants
- In a study of 250 patients, the accuracy of psychiatric diagnosis was highest for cognitive disorders (60%), followed by depression (50%) and anxiety disorders (46%). The lowest was the diagnosis of psychosis, with an accuracy of 0%. Essentially, in a world where 21 million Americans have faced at least one major depressive episode in their lifetime, more than 1 in 3 people receive the wrong diagnosis for the disorder.
- Olbert and colleagues (2014) report considerable heterogeneity within the criteria of individual diagnoses, showing that in the majority of diagnoses in both DSM-IV-TR and DSM-5 (64% and 58.3% respectively), two people could receive the same diagnosis without sharing any common symptoms.
- Insurance firms rely on DSM-5 diagnoses when deciding whether to provide coverage, and inaccuracies in this process are contributing to a situation where 42% of people struggle to cover the high costs related to mental health.
Returning to Mr. Getty and the ADHD example, his experience is only one of almost a million children in the States who are misdiagnosed with this disorder. This epidemic of misdiagnosis causes enormous downstream expenses: an estimated $320–500 million a year is wasted on unnecessary ADHD medication.
It’s clear that psychiatry needs a revamp with a focus on precision, but how do we do that without completely disrupting the standard of care today? To answer that, we’ll first have to dissect what is currently being done, from diagnosis to treatment planning.
We’ve already covered the general pipeline for diagnosis, where the bare minimum is using the patient’s descriptions of their symptoms to classify a disorder against DSM-5 criteria. Psychiatrists rarely stop with the DSM, though the rest of the diagnostic process still operates within the manual’s scope.
Depending on the suspected disorder, diagnosis can take several different paths. For example, if an anxiety or depressive disorder is suspected, thyroid function may be investigated, as thyroid disease can affect mood. Likewise, physical evaluations may be made for more information on the disorder at hand; generally, these address the side effects of a patient’s prior prescriptions. Such evaluations are as close as you’ll get to biomarker usage in mental health today.
After an initial psych evaluation, a more in-depth analysis will take place to identify the principal, differential, and comorbidity diagnoses.
Psychiatrists will assess all the data from the initial assessment to identify the principal diagnosis. This is the primary reason for presentation and the main focus of treatment and attention. Going back to Mr. Getty, the principal diagnosis would have been ADHD.
Differential diagnosis is used to identify all possible conditions given a set of symptoms, akin to a process of elimination.
Finally, once a diagnosis has been chosen through this process of elimination, comorbidities are addressed. Patients can present with additional diagnoses that coexist with the principal one; these non-principal conditions are called comorbidities.
Across these three steps (principal, differential, comorbidity), the main factors analyzed are:
- Past and current psychiatric diagnoses
- Past psychiatric treatments (type, duration, and, where applicable, doses)
- Adherence to past and current pharmacological and nonpharmacological psychiatric treatments
- Response to past psychiatric treatments
- History of psychiatric hospitalization and emergency department visits for psychiatric issues
- Evaluations of appearance and behaviour, self-reported symptoms, mental health history, and current life circumstances
A mental health treatment plan is simply a set of written instructions and records relating to the treatment of an ailment or illness. A treatment plan will include the patient or client’s personal information, the diagnosis, a general outline of the treatment prescribed, and space to measure outcomes as the client progresses through treatment.
Think of it as a playbook for a patient's treatment.
Sometimes this treatment contract is written down explicitly, but more often it is discussed between the individual seeking therapy and the therapist.
Intuitively, treatment planning doesn’t stray from the DSM’s roots: the prescription is a function of the diagnosis, so if the diagnosis sucks, the treatment sucks too.
Current treatment planning standards try to compensate for the lack of information from DSM by taking a trial-and-error approach.
Doctors first review clinical records to see if evidence exists for recommending one medicine over another. They also consider family history and side effects when prescribing medication.
After an initial prescription, it may take several weeks to months before you see improvements. Now, imagine waiting months for a medication to work, just to find out that you were never responsive to it.
This lengthy iterative approach doesn’t benefit the practitioner or the patient.
Why the DSM method sucks
You can imagine that symptoms are not always unique to one diagnosis, which brings us to the first flaw of the DSM: heterogeneity within the criteria of individual diagnoses.
In the DSM-5 there are 270 million combinations of symptoms that would meet the criteria for both PTSD and major depressive disorder, and when five other commonly made diagnoses are seen alongside these two, this figure rises to one quintillion symptom combinations — more than the number of stars in the Milky Way. With this many ways to classify a single disorder, you can see why heterogeneity arises.
This issue of heterogeneity also manifests within treatment planning, as it is dependent on the DSM-5 classifications.
Even in identical diagnoses in similar individuals, differences are bound to manifest in any or all of the following components:
- History and Demographics — client’s psychosocial history, history of the symptoms, any past treatment information
- Assessment/Diagnosis — the therapist or clinician’s diagnosis of the client’s mental health issues, and any past diagnoses will also be noted
- Presenting Concerns — the problems or symptoms that initially brought the client in
- Treatment Contract — the contract between the therapist and client that summarizes the goals of treatment
- Responsibility — a section on who is responsible for which components of treatment
- Strengths — the strengths and resources the client brings to treatment (can include family support, character strengths, material support, etc.)
- Treatment Goals — the “building blocks” of the plan, which should be specific, realistic, customized for the client, and measurable
- Objectives — goals are the larger, more broad outcomes the therapist and client are working for, while multiple objectives make up each goal; they are small, achievable steps that make up a goal
- Modality, Frequency, and Targets — different modalities are often applied to different goals, requiring a plan that pairs modalities, a frequency of sessions, anticipated completion date, etc., with the respective goal
- Interventions — the techniques, exercises, interventions, etc., that will be applied in order to work toward each goal
- Progress/Outcomes — a good treatment plan must include space for tracking progress towards objectives and goals (Hansen, 1996)
In other words, the way mental illness has been conceptualized — Major Depressive Disorder (MDD), Generalised Anxiety Disorder, Schizophrenia, and so forth — is problematic because it groups dissimilar symptom profiles together. Such heterogeneous diagnoses lack treatment specificity, a clear clinical presentation, and precise diagnostic boundaries, and have high comorbidity rates and very low inter-rater reliability.
So how does this heterogeneity issue affect patients and clinicians?
Building new boundary lines for diagnosis
Clearly, there is a need for a more objective diagnostic process that also considers individual differences. Simply using a patient’s subjective symptoms and other historical reports does not account for the heterogeneity the DSM poses; the DSM may be intrinsically flawed. So we look to other medical fields to bridge the gap.
What’s the one thing that cardiology has that psychiatry doesn’t? Biomarkers.
But what exactly are biomarkers?
The FDA defines a biomarker as “a defining characteristic that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions.”
In general, biomarkers are any measurable indicator of some physiological state. This state is determined by what type of biomarker is used.
The term biomarker can be further distilled into multiple subsets, the most important being:
- Prognostic: Used for predicting patients with differing risks of an overall outcome of disease, regardless of treatment.
- Predictive: Used for predicting the likelihood of the patient's responses to a particular treatment.
- Diagnostic: Used to detect or confirm the presence of a disease or condition of interest or to identify individuals with a subtype of the disease.
- Risk: Associated with an increased or, in some cases, decreased chance of developing a disease or medical condition in an individual who, from a clinical standpoint, does not yet have that disease or medical condition.
From the categories above, it may seem obvious that diagnostic markers would be the most useful for diagnosis, but in fact it is a combination of these biomarkers that gives a patient the best recovery path.
For example, cardiologists may measure troponin-T to diagnose heart attacks and then use hs-CRP tests to estimate the probability of a recurring heart attack. These proteins can easily be assessed with standard blood tests. By measuring these biomarkers, patients get more clarity on their abnormalities and can better prepare for the future.
Biomarkers in the Brain
So we know there are biomarkers in our blood, but how would we measure them in our brains? Clearly, we can’t slice open our heads whenever we need to diagnose mental health. Fortunately, we can read the signature of our brains non-invasively through a technique called electroencephalography (EEG).
In short, EEG non-invasively records electrical activity in the brain via a set of scalp electrodes. This electrical activity is correlated with neural activation across large cortical areas with millisecond precision. We can use the voltage readings from EEG to identify specific activity that may be correlated with mental disorders, acting as biomarkers for brain health (neuromarkers).
So how exactly are these squiggly lines related to depression, anxiety, schizophrenia, etc.? To answer this question, we need to dive into the realm of QEEG and ERP.
Quantitative EEG (QEEG)
Three years after recording the first human EEG, Hans Berger, the inventor of EEG, conducted the first quantitative EEG study in 1932, applying a technique called the Fourier transform to spectrally analyze electrical brain activity. His intention? Dr. Berger wanted to make his invention quantifiable and objective, anticipating its future diagnostic and prognostic uses; he did just that.
Fundamentally, the EEG signal can be broken down into electrical energies, or power bands. Each of these bands reflects a certain range of frequencies present in your brain waves. These frequencies can further inform us of specific neuropsychological and neurophysiological patterns.
At a high level, here’s what each of the major frequency ranges correlates to:
- Alpha: Conscious relaxation
- Beta: Conscious focus, memory, problem-solving
- Theta: Creativity, emotional connection, intuition, relaxation
- Delta: Immune system, natural healing, restorative / deep sleep
- Gamma: Binding senses, cognition, information processing, learning, perception, REM sleep
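To make the band idea concrete, here’s a minimal sketch of how band powers can be computed from a raw EEG trace using Welch’s method, with a synthetic signal standing in for real data. The band boundaries vary slightly between labs, and every number here is purely illustrative:

```python
import numpy as np
from scipy.signal import welch

fs = 256                      # sampling rate in Hz (typical for clinical EEG)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of data
rng = np.random.default_rng(0)
# Synthetic single-channel EEG: a 10 Hz alpha rhythm buried in noise
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

# Welch's method estimates the power spectral density (PSD) of the signal
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# Conventional band boundaries in Hz (definitions differ across labs)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}

# Absolute power per band: area under the PSD curve over that range
df = freqs[1] - freqs[0]
band_power = {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
              for name, (lo, hi) in bands.items()}

# Relative power: each band as a fraction of total measured power
total = sum(band_power.values())
for name, p in band_power.items():
    print(f"{name}: {p / total:.1%}")
```

On this synthetic trace, alpha dominates, as you’d expect for a signal built around a 10 Hz rhythm; in a clinical QEEG, the same computation is run separately for every electrode.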
We can take this frequency-to-behaviour relationship to the next level by bringing location into play. We can not only understand general trends in our brain waves, but also use spatial information to derive even more specific relationships. By applying both spectral analysis (breaking the signal down by frequency) and spatial analysis (localizing the source of the EEG), we reach the true power of QEEG.
Combining QEEG data with our current knowledge of neurophysiology, we can reach conclusions about what specific abnormalities may be arising in a patient, and in which connectivity networks or sections of the brain. By comparing a patient’s QEEG against a database of other recordings considered “normal” (a normative database), we can get quantitative measures of where, when, and how a patient strays from the norm.
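As a sketch of what such a comparison looks like, the snippet below z-scores a patient’s relative band powers against hypothetical normative statistics. All the numbers are invented for illustration; real normative databases are stratified by age, sex, recording condition, and electrode site:

```python
# Hypothetical normative mean and standard deviation of relative band
# power at one electrode site -- purely illustrative numbers
norm_mean = {"delta": 0.20, "theta": 0.18, "alpha": 0.35, "beta": 0.22, "gamma": 0.05}
norm_sd   = {"delta": 0.05, "theta": 0.04, "alpha": 0.08, "beta": 0.05, "gamma": 0.02}

# A patient's measured relative band powers (also illustrative)
patient = {"delta": 0.22, "theta": 0.30, "alpha": 0.18, "beta": 0.23, "gamma": 0.05}

# z-score: how many standard deviations the patient sits from the norm
z_scores = {b: (patient[b] - norm_mean[b]) / norm_sd[b] for b in patient}

# Flag bands more than 2 SD from the norm (a common, though arbitrary, cutoff)
flagged = {b: round(z, 2) for b, z in z_scores.items() if abs(z) >= 2}
print(flagged)  # elevated theta and reduced alpha stand out in this example
```

The same z-scoring, repeated per electrode and per band, is what produces the colour-coded deviation maps shown in QEEG reports.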
The figure above showcases how we can use QEEG to generate 2D maps (Topographic Mapping) and 3D maps (Low-Resolution Brain Electromagnetic Tomography) of the brain that highlight which regions are exhibiting abnormalities across various frequency ranges.
Based on these plots, psychiatrists can derive more precise and statistically educated predictions on what treatments may be the best fit for a patient, and what the primary diagnosis should be. QEEG, therefore, fits in the box of being a multipurpose neuromarker.
One specific application where QEEG is regularly implemented is neurofeedback. Neurofeedback is a technique that aims to “reset”, or bring to homeostasis, a patient’s brain activity using external feedback that tells the patient when they are doing better or worse. Our brains naturally strengthen or weaken neural pathways based on this feedback, through a process called operant conditioning.
Neurofeedback trains specific frequencies and areas of the brain to return to homeostasis. A patient’s EEG is recorded, and in real time, feedback, such as sounds or pictures on a screen, is provided to the user based on whether the target frequencies have reached a certain threshold. Through this training, a patient’s brain is able to return to the norm, alleviating certain mental disorders.
But how do psychiatrists know which parts of the brain, and which frequencies, need to be “reset”? Look no further than QEEG! Using QEEG brain maps, psychiatrists can fine-tune their neurofeedback protocols to the specific frequencies and brain areas where a patient strays from the norm. QEEG is simply a comparison of EEG data against the norm, perfect for neurofeedback training.
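As a toy illustration of the feedback loop, the sketch below scores one-second EEG epochs by relative alpha power and issues a reward whenever the score crosses a threshold. The threshold value and the simulated signals are made up for the example; in practice the target band and threshold come from the patient’s QEEG baseline:

```python
import numpy as np

fs = 256  # sampling rate in Hz

def relative_alpha(epoch, fs=fs):
    """Fraction of an epoch's spectral power in the alpha band (8-12 Hz)."""
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return psd[(freqs >= 8) & (freqs < 12)].sum() / psd[freqs >= 1].sum()

# Illustrative threshold; a real protocol derives this from the QEEG baseline
THRESHOLD = 0.30

def feedback(epoch):
    """One cycle of the loop: measure the epoch, then reward or withhold."""
    return "reward" if relative_alpha(epoch) >= THRESHOLD else "withhold"

# Simulate two 1-second epochs: one with a strong 10 Hz alpha rhythm, one without
rng = np.random.default_rng(1)
t = np.arange(fs) / fs
relaxed = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(fs)
tense = 5e-6 * rng.standard_normal(fs)
print(feedback(relaxed), feedback(tense))
```

In a real session this loop runs continuously, and the “reward” is delivered as a sound or a change on screen rather than a printed string.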
This is only one example of where QEEG can fine-tune therapies and treatments.
Event-Related Potentials (ERP)
Four years after Berger’s first QEEG study, Hallowell and Pauline Davis made the first recordings of human auditory event-related potentials, identifying one of the first correlates of perception in EEG.
Event-related potentials (ERPs) sound like a complex Ph.D.-level neuroscience topic, but they’re actually simpler than QEEG. Our brains respond and function differently when our physical state changes (a long way of saying human sensory perception); ERPs are essentially the EEG representation of these state changes.
In the lab, we can intentionally cause these state changes using external stimuli (visual, auditory, tactile, etc.) presented to a patient at unpredictable intervals. We can then measure the patient’s EEG during stimulation, average the time segments around each stimulus, and view the ERP response. Below is a common ERP response; each spike and dip represents an individual component that informs us of a different cognitive process.
The layman’s explanation: When you sense something unusual, you can identify unique spikes and dips in your brain activity, and these anomalies in your brainwave tell us about how your brain registered the event.
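The averaging procedure described above can be sketched in a few lines. The P3-like bump, trial count, and noise level below are all simulated stand-ins for real recordings:

```python
import numpy as np

fs = 1000                       # 1 kHz sampling gives millisecond resolution
n_trials, epoch_len = 200, 600  # 200 stimulus presentations, 600 ms epochs

# Simulate single trials: a P3-like positive bump around 300 ms buried in
# ongoing EEG activity that is much larger than the response itself
t = np.arange(epoch_len) / fs * 1000                    # time axis in ms
p3 = 10e-6 * np.exp(-((t - 300) ** 2) / (2 * 40 ** 2))  # Gaussian bump
rng = np.random.default_rng(2)
trials = p3 + 20e-6 * rng.standard_normal((n_trials, epoch_len))

# Averaging across trials cancels activity that is not time-locked to the
# stimulus; noise shrinks by roughly sqrt(n_trials) and the ERP emerges
erp = trials.mean(axis=0)
peak_ms = t[np.argmax(erp)]
print(f"P3-like peak at ~{peak_ms:.0f} ms after stimulus onset")
```

A single trial here has a signal-to-noise ratio well below one, which is why no individual epoch shows the component; only the time-locked average does.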
Here’s what each of those spikes and dips represent:
- P1: A test of sensory gating, which is crucial to an individual’s ability to selectively attend to salient stimuli and ignore redundant, repetitive or trivial information, protecting the brain from information overflow. The most positive peak between 40 and 75 msec after the conditioning stimulus is the P1.
- N1: A negative deflection peaking between 90 and 200 msec after the onset of stimulus, is observed when an unexpected stimulus is presented. It is an orienting response or a “matching process,” that is, whenever a stimulus is presented, it is matched with previously experienced stimuli. It has maximum amplitude over Cz and is therefore also called “vertex potential.” N1 is closely associated with attention.
- P2: Refers to the positive deflection peaking around 100–250 msec after the stimulus. Current evidence suggests that the N1/P2 component may reflect the sensation-seeking behavior of an individual.
- N2: Includes 3 separate components in the 200 msec range, which can represent the brain’s automatic processes involved in encoding a stimulus difference or change.
- P3: A fan favourite at 300 msec after the stimulus, it can gauge increases in attention and also serve as an indicator of the broad neurobiological vulnerability that underlies disorders within the externalizing spectrum (more on that in a second).
A trend you may have noticed above: ERP components closer to stimulus onset (the start of the external event) are more closely tied to the type of the stimulus, while those after 200 msec relate more to the cognition of the stimulus. Accordingly, ERP components are divided into two categories: sensory/exogenous and cognitive/endogenous.
Because cognitive ERPs are linked to information processing networks, the amplitude and latency characteristics of these components can have similarities across groups of psychiatric patients.
For example, one of the most robust neurophysiological findings in schizophrenia is a decrease in P3 amplitude. P3 is often smaller in amplitude and longer in latency in patients who have been ill longer. Another example includes panic disorders, where an enlarged frontal P3a to a distractor stimulus among patients has been reported using a three-tone discrimination task, supporting the hypothesis of dysfunctional prefrontal-limbic pathways.
These explorations have shown the value of ERP components as multipurpose brain biomarkers. Just like QEEG, they have diagnostic, prognostic, and predictive capabilities.
Putting neuromarkers into practice; what are the blockers?
If clinics take the initiative to implement even one of these techniques in their practice, they have the capability to diminish the problem of heterogeneity created by the DSM. So what’s stopping us?
QEEG and ERP are in severe need of standardization
Truth be told, event-related potentials are extremely delicate, with a lot of moving parts. Simply changing the time between stimulus onsets, or the environment a paradigm is recorded in, can cause significant variability across sources of comparison.
This means that many correlates discussed in literature or identified in normative databases may not be of use because they used different trial variables. This calls for the cognitive neuroscience community to come to a consensus on a standardized protocol for ERP analysis (ERP Core is an example of an initiative promoting best practices).
EEG Neuromarkers Demographic Differences
Outside of experiment/equipment variables, the types and nature of biomarkers may change over the course of the condition. Additionally, age and gender can affect the types and presence of specific neuromarkers.
This variability diminishes the value of neuromarkers, as studies are forced to take global averages to show the statistical significance of a neural correlate between niche groups and controls.
Skill and Knowledge Prerequisites
Psychiatrists generally do not have the skills necessary to operate EEG hardware and analyze EEG data. QEEG can be problematic, particularly in the hands of untrained operators: the statistical results can be influenced by wrong electrode placement, artifact contamination, inadequate band filtering, drowsiness, comparisons against incorrect control databases, and choice of epoch.
This either requires psychiatrists to close major knowledge gaps through extensive training, or to outsource recording and analysis work to third-party specialists, which raises the cost of adding neuromarkers to a practice.
We need more reliable normative databases
Database-less QEEG and ERP analysis is possible but requires a significant amount of experience (think Jay Gunkelman level). Reliable databases are imperative for the mass adoption of QEEG and ERP. Problems current databases face include:
- Broad recruitment criteria
- Higher artifact contamination in clinical patients than in typical subjects
- The potential influence of medication
These issues with current QEEG/ERP databases create a negative image around the techniques: poor data collection can itself cause heterogeneity to arise, putting us back at square one.
High sensitivity, Low specificity
Certain correlates may be biomarkers for multiple disorders and treatment pathways. Clinical utility has been hampered by the fact that ERP parameters (e.g., amplitude and latency) are diagnostically unspecific.
In other words, ERP deficits are a common feature of several psychiatric afflictions, but they will not assist clinicians in deciding whether a given patient is depressed, paranoid, or an alcoholic.
Old Practices are Comfortable
The adoption of ERP has also suffered from the predominance in psychiatry of the official nosological systems, such as the DSM.
A main and crucial point that makes this categorical approach still dominant in psychiatry is that it greatly facilitates clinical communication among mental health practitioners, as all textbooks and practice guidelines have been developed based on these categories. For a similar ecosystem to form around EEG diagnostics, we need easier methods for psychiatrists to enter the space and experiment in practice in the first place.
Additionally, cognition is still not considered a primary treatment target; it is still envisaged as just another category of symptoms. ERPs are correlated with cognition, yet psychiatrists currently optimize for minimizing symptoms and side effects (including cognitive effects) using drugs, ignoring the fact that cognitive deficits can be a cause.
Journey towards adoption
These major barriers must be tackled before neuromarkers can be introduced and adopted at scale in psychiatry. The potential of neuromarkers is massive, but the gap that must be filled first is just as large. The reliability of these markers has been one of the hardest problems to tackle for decades.
Despite these major issues, many trailblazers in the industry are making strides with neuromarkers in their own clinics. Here’s a fantastic series of articles covering psychiatrists who are experimenting with this technology, and the results they’ve achieved:
- Brain hacking: The Mind's Biology
- Brain hacking: Hot-wired for happiness?
- Brain Hacking: Beyond the catchphrase
It should also be explicitly addressed that neuromarkers DO NOT need to replace current diagnostic manuals. Even if psychiatrists use neuromarkers alongside the DSM-5, we will see substantial improvements in terms of personalization of mental health.
In my next article, I’ll cover my favourite neuromarker solutions and how they aim to level the playing field between psychiatry and areas like cardiology. I’ll also introduce a special project that I’ve been working on that aims to solve the neuromarker adoption problem from its roots. See you there!