The Revolution Will Be Lobotomized

Written in March 2023

The DSM, long considered a paragon of psychiatric manuals, now finds itself on increasingly unstable ground

Written by ZACHARY HAYES

On an otherwise routinely panicked morning in the early spring of 2015, I sat in a small, sterile office tucked into the corner of the behavioral health inpatient ward of Charlotte Presbyterian and waited for my board-certified physician to bring me what so many therapists and dermatologists and x-ray technicians and general practitioners before him had neglected to provide me with: one final, morbidly satisfying diagnosis. An end to the endless uncertainty. In the months prior to my brief, voluntary admission to the ward, I had subjected myself to a barrage of physical and psychological prodding at the hands of the increasingly exasperated medical community in the Charlotte-Mecklenburg area, hoping to confirm what I, in my rapidly declining mental health, had determined to be a death sentence.

A recent medical scare had introduced me to WebMD and a particularly insidious brand of diagnostic doom-scrolling that persisted well after I’d been given the all-clear. Leaning against a plate glass door one afternoon, I noticed something off: my back was uneven, severely swollen on the left side. I took to the bathroom mirror and lifted my shirt, twisting my head around to discover a large mass on my back that — according to the family practitioner I’d see the next day — covered the entire area directly over my left kidney. A series of scans and tests would follow, but it took time to get an appointment, leaving me to search for answers on my own in the meantime.

By the time I was injected with contrast material and glided into the maw of the CAT, I was wrestling with the dismal 5-year survival rates for the advanced kidney cancer I was sure it would reveal. A week later, I was spared by the radiologist: just a fatty lipoma, non-malignant, and with it, for the first time in weeks, permission to breathe. The specialist I would follow up with, however, begged to differ. That lump? Scoliosis, he said, just some bunched-up muscle. But surely a tumor — malignant or not — couldn’t be so easily dismissed by a brief look at my back. What exactly had the radiologist seen on the scans? Most importantly, why was nobody else questioning this? As my faith in professional medical opinions began to dwindle, it was clear that something was wrong, cracked deep inside me, and there was no moving past it now.

I had been tossed into uncharted waters and couldn’t seem to find the shore, convinced that something predatory lurked beneath the surface. I began to see signs of it everywhere I looked: a “swollen” lymph node here, a sinister asymmetry there, does the left side of my cheek seem a little fuller to you? While the physical diagnoses I sought seemed impossible to obtain, diminished — but never fully assuaged — by examination (self and otherwise) and biopsy, I picked up a hearty variety of mental health diagnoses with relative ease along the way. Generalized Anxiety Disorder, Hypochondria, Body Dysmorphia. Puzzle pieces, they seemed, informative on their own but never revealing the full picture.

As it turns out, this is not an uncommon experience. There are no empirical tests to accurately diagnose mental illness, no biopsies or scans that will reveal, without question, the distinct affliction an individual is experiencing. Instead, diagnoses are derived from either a rigorous series of mental status evaluations, psychological tests, and clinician-rated surveys, or — more commonly, given time and budgetary constraints — a relatively brief examination centered around the patient’s symptoms. The evaluating physician will then make the final judgment based on their interpretation of these symptoms in line with one of the ascendant diagnostic manuals, most commonly the Diagnostic and Statistical Manual of Mental Disorders — or DSM-5 — in the United States.

The DSM holds a unique power over the modern Western understanding of mental health. This influence goes beyond the diagnoses patients receive from their doctors to shape the very way we as a society understand and talk about our minds. The films and TV we consume are brimming with references to DSM disorders, interrupted by ads touting the latest pharmaceutical remedies to Major Depressive and Bipolar Disorders. Celebrities are applauded for revealing their struggles with PTSD, and Presidents are diagnosed with Narcissistic Personality Disorder by armchair physicians. And yet, despite its ubiquitous status as the “bible” of psychiatry — a position that indicates a certain authoritative credibility on such matters — the DSM and its model of defining mental illnesses as discrete, medical entities have been increasingly plagued by controversies questioning their validity.

There is the issue of comorbidity, or the simultaneous diagnosis of two or more conditions in a patient, for example. The psychiatric field experiences a questionably high rate of this co-occurrence; in fact, according to a 2014 study by Caspi et al. published in Clinical Psychological Science, “Half of individuals who meet diagnostic criteria for one disorder meet diagnostic criteria for a second disorder at the same time, half of individuals with two disorders meet criteria for a third disorder, and so forth.” Anecdotal as it may be, my growing laundry list of diagnoses is evidence of this complaint, and the problem stems from the very foundation of the DSM model. Essentially, the lists of symptoms defining these individual disorders overlap with such regularity that the closer one looks, the more arbitrary the divisions seem. The lines between misdiagnosis and something more fundamentally flawed too easily blur.

But on that early spring morning in Charlotte Presbyterian, there was a breakthrough in the case. I finally received that sweet, holistic diagnosis I had gone to so much trouble to obtain. To my surprise, it was not the malignant neoplasm of the submandibular gland I had spent the last few weeks vigorously prodding the floor of my mouth for several hours a day, every day, in search of — rare, but statistically possible, for my age group — but rather Obsessive Compulsive Disorder, or OCD. I had scoured the CDC death charts and subjected myself to a cyclical battery of case studies and prognosis reports in preparation for the doctor’s word, and when it finally came, it occurred to me that I had overlooked something crucial. Even before I was discharged, I started putting the pieces together and, once I returned home to research the matter further, I began to see all of my experiences over those last few months in a new light. Repetitive, intrusive thoughts, insatiable compulsions, intense emotional distress: it was all there, bundled up in a neat little package that I could present to a psychiatrist and say, “Here I am. Treat me.”

At first, I found great relief in researching my new diagnosis; I joined OCD forums where I could commiserate with my fellow obsessives; I dredged the internet for statistical data, causative theories, and treatment plans; I was determined to understand the shape of my disorder, to differentiate between the distorted reality it presented me with and what could be chalked up to a normal level of concern over my health. “Normal,” it turns out, is a rather difficult standard to lock down. Arbitrary even, as critics of the DSM-5 point out. While I was working to determine that baseline for myself, psychiatric researchers had been raising concerns over the manual’s delineation between normal and disorder for years.

This is no doubt a subjective, controversial question, but I assumed that those responsible for establishing these lines took great care to be as empirical as possible in making these decisions, perhaps in a way I could replicate for myself on an individual level. As I took to researching their methodologies, a different reality emerged. Rather than the frigidly logical data analysis I had imagined, DSM diagnoses are decided through an authoritative, top-down approach: the criteria are determined by the consensus of the industry professionals, researchers, and experts who make up the DSM task force and work groups, and when there is no consensus — a startlingly common occurrence — the matter is put to a vote. In the most recent revision, this process was marred by so much inconsistency and disagreement that several work group members resigned in disgust, highlighting the political nature of these debates. But even so, aren’t these the voices most qualified to speak on the matter?

Take a look at the DSM’s credentials: Published by the American Psychiatric Association, the largest psychiatric organization in the world, the DSM is now in its fifth edition after a 14-year revision process undertaken by a central cohort of over 160 industry leaders in psychiatry, psychology, neurology, and epidemiology representing more than 90 psychiatric and academic institutions across 16 countries. With such a reputable foundation building on the research contributions of hundreds of outside advisors and the collected works of the numerous joint conferences sponsored by the APA, the World Health Organization, and the National Institutes of Health, it’s easy to look at all this as evidence of the manual’s supreme credibility. But these wide-ranging inputs inject conflicting interests, theories, and politics into the process, and when a topic as subjective and controversial as normality is up for debate, logic rarely wins out.

As I grappled with the question of normality, I took notice of yet another peculiarity in my diagnosis. The symptoms I was experiencing were, at first glance, perfectly explained by the diagnosis of OCD, but the deeper I looked into the variety of symptoms that defined the disorder, the shakier it seemed. There were many characteristic symptoms that did not apply to me, such as violent and aggressive intrusive thoughts, or the obsessive need for cleanliness, or, ironically, hoarding. It’s not unheard of for a disease to manifest in different ways for different people, but such a broad, sometimes contradictory range of symptoms raises the question of how stable the disorder actually is as a distinct category.

How useful is a label if it is so broad as to essentially be meaningless? According to an analysis of the DSM-5, there are approximately 636,120 different combinations of symptoms that meet the diagnostic criteria for Posttraumatic Stress Disorder. With this in mind, if someone were to offer up one of these diagnoses to a physician, would they really understand what it is they are treating without first dissecting that patient’s specific catalog of symptoms? As the flaws of the DSM model come to light — the arbitrary lines between disorders, between illness and normality, and the broad, overgeneralized criteria for the disorders themselves — we must consider the possibility that the lens through which we currently understand mental health might be fundamentally distorted. But in order to understand how we arrived at this juncture, we must consider the context within which the DSM-III rose to power and revolutionized the psychiatric world. It is from here, through the political and financial forces that shaped its creation, that the manual’s flaws arise.

Prior to the tumultuous decade preceding the DSM-III’s release in 1980, the early 20th century had been a period of relatively broad acceptance and prestige for the dominant school of psychiatric thought at the time: Freudian dynamic psychiatry. Rather than strictly basing a patient’s diagnosis on their symptoms, this model also considered an individual’s behavior, environment, and personal history. The issues the individual was facing and their etiology — or underlying cause — were determined by psychoanalysis, in which a psychiatrist would work through the individual’s life history, constructing complex theories to explain their experiences. The treatment here was largely psychotherapy, or talk therapy, and diagnostic categories played a very limited role. Instead, symptoms were viewed as either reflections of broad, underlying psychological causes or reactions to life problems.

Here, the differences with modern psychiatry run deep. First, today’s DSM model is descriptive, meaning it only looks at the symptoms without consideration for underlying causes. In contrast, the focus of psychiatry at this time was on deciphering those very root causes, but since there was no regulation or standardization in the field, causative theories varied from institute to institute, office to office. Freud was in vogue, but Jung, Adler, Kraepelin, and a slew of contemporaries all found their adherents, and all of these theories were open to individual interpretation by a growing body of mental health clinicians. Dynamic psychiatry also largely rejected the traditional lines between normal and mentally ill, opting instead for a dimensional perspective in which nearly everyone existed on a continuum to varying degrees. There was a problem inherent in this as well: how to determine who needed treatment. But by the 1960s, psychiatry had greater concerns to attend to.

For starters, a notable antipsychiatry movement began to rise in popularity, claiming mental illness was a myth used to control social deviance and nonconformity. These were not marginal eccentrics either, but rather major intellectual figures, embraced by academia and a growing counterculture. Psychiatry’s financial legs grew increasingly unstable as well. Where psychiatry had often been an out-of-pocket expense for much of the early 20th century, third-party insurers increasingly began to foot the bill in the 60s and 70s. In fact, federal employee health coverage reimbursed mental health treatment dollar for dollar with medical illnesses in the 1960s.

This became a major source of pressure for several reasons. For one, empirical studies arose that questioned the effectiveness of psychotherapy (it was helpful for minor, personal struggles but largely ineffective for serious mental illness). This led insurance agencies and the government to view the practice in a poor cost-benefit light. Unlike the specificity afforded by medical diagnoses and the verifiable causes of physical illnesses, the vague definitions and theoretical hodge-podge of mental health at the time fit poorly into insurance models of reimbursement, a point these funding bodies were quick to cite. As a result, these institutions began pushing for discrete diagnoses and effective, standardized treatments as they rolled back coverage in the 1970s.

While all of this was going on, the cumulative effects of two decades of deinstitutionalization were beginning to take effect. Prior to the ‘60s, those with severe mental illness were quarantined from society for long periods of time — often years to decades, if they were ever released — in appallingly maintained state-run mental asylums which had become infamous for their rampant abuse and neglect (psychiatrists had largely abandoned these institutions by the middle of the century).

The introduction of the antipsychotic drug chlorpromazine in 1952 was instrumental in kicking off deinstitutionalization; its ability to stifle the agitation of severe psychosis allowed for a manageable reintroduction to society for many patients. The cause also found a powerful ally in President John F. Kennedy, whose sister Rosemary underwent a prefrontal lobotomy at 23 that left her critically disabled and confined to one such institution for the rest of her life.

In his final legislative act as President, Kennedy signed the Community Mental Health Act in 1963. The act was considered a landmark in mental health legislation, an ideological icon of what the federal government could accomplish, aiming to tackle the inadequacies of mental healthcare by building and funding mental health centers organized around the concept of community support. But the plan lacked structure, leading to an inadequate — and incomplete — realization of its lofty goals.

As I consider this alternate, idealist reality, I look back on my time in a mental health facility and wonder what could have been. I was in crisis, a moment of acute panic in need of acute treatment, and where else do you go for such an emergency but the hospital? I was stripped, checked for distinguishing marks, and placed in the low-security wing of the behavioral health unit with a gown and a handful of other patients biding their time until they too could speak with someone they hoped would help. In the meantime, there were a few group activities — mostly voluntary roundtable chats tinged with the voyeuristic intimacy of a Dear Abby column — but treatment here largely consisted of time to sit and think about how you ended up there.

I took to wandering the space and quickly discovered a pair of doors separating us from the high-security wing which housed the patients suffering from debilitating psychosis or volatile aggression. One morning, through a small, wired glass window in the door, I saw a mousy, middle-aged woman shuffle into the hallway. Dazed, she approached the wall opposite her room and raised a frail, hesitant hand as you might before stepping through the opening left behind by a pristine sliding glass door. Just as soon, a nurse rounded the corner and nudged her back into her room, closing the door behind her before disappearing back down the hall. After that, I sat and thought not about my troubles but about this woman. How long had she been here? What would become of her? Anecdotal as it is, I felt as though I had witnessed a failure in the system — if not right then, somewhere along her way there. In the absence of structured support, we’d merely been put on hold as diagnoses were handed out and discharge papers were drafted.

Kennedy’s faltering idealism is not solely to blame for our current situation, though; there were subsequent attempts to inject new life into the system. In the month prior to the 1980 election, President Jimmy Carter signed the Mental Health Systems Act, providing further grant funding to the cause, but with the impending election of Ronald Reagan, it was clear that the political pendulum had begun to swing in the other direction. In the summer following his inauguration, Reagan signed the Omnibus Budget Reconciliation Act, repealing nearly all of the Community Mental Health and Mental Health Systems Acts and cutting state grants for mental health and substance abuse services by 75 to 80 percent, all but eliminating the United States’ single attempt at a unified mental healthcare system under the pretense that federal intervention was the cause of most of the country’s ills.

All the while, the number of institutionalized individuals was plummeting; by 1980, it had dropped 75% from its peak in 1955. With this arose a new demographic of young people suffering from severe mental illness who didn’t qualify for Medicaid, leaving them to drift in and out of psychiatric wards and the criminal justice system (the three largest psychiatric facilities in the United States today are jails). While this might have seemed a boon to psychiatrists at the time, the practice had largely shifted between 1900 and 1970 from one focused on the treatment of insanity to one focused on the life problems and dissatisfactions of wealthy, young, otherwise healthy individuals — the “worried well” — leaving psychiatrists either ill-prepared for or uninterested in treating those with severe illnesses or substance abuse issues. After all, talk therapy was largely ineffective with this demographic. It was around this time that the prescription of medications to treat all manner of mental issues began to take off. By the late 1970s, the non-prescribing psychiatrist was the exception.

This shift appears magnified to an even greater degree today. When I myself was placed in the care of a psychiatrist after being discharged from the hospital, I was excited, anxious to build a relationship with this professional who could help me make sense of what I was experiencing. To my disappointment, my first visit featured only a brief acknowledgment of my presence, a prescription for an antidepressant, and instructions to return in a few weeks as my hour was cut short. I don’t even remember us having a proper introduction. Sometime later, a friend in the field would tell me they hadn’t “seen” a patient in nearly 15 years, a far cry from the intimate practice of the psychoanalysts. As it turns out, psychiatry’s relationship with talk therapy began to slip some time ago.

As psychiatrists began transitioning to outpatient care, the postwar period saw a skyrocketing demand for mental health treatment that outstripped supply. While the number of psychiatrists grew by nearly 600% between 1947 and 1976, the number of other mental health clinicians, including psychologists, social workers, family counselors, and primary care physicians, grew at even higher rates. While psychiatrists retained the exclusive right to prescribe medication or conduct medical treatments, these clinicians were able to offer equally effective talk therapy at a fraction of the cost, encroaching on psychiatry’s financial and existential territory. By the time courts began to uphold arguments that medical training was irrelevant to the practice of talk therapy, psychiatrists’ claims to the practice were all but dissolved.

Finally, there were also calls from the small, research-oriented wing of psychiatry to bring the practice into the realm of empirical scientific inquiry. They saw the need for a more reliable diagnostic system to foster better research and peer review. They also argued that treatments should be determined by the same kinds of quantitative, comparative studies used to test drugs: those based on samples of uniformly diagnosed and treated patients, an impossibility in such an unregulated field. These calls came to a head in 1962 when Congress passed the Kefauver-Harris drug amendments following the public outcry over the thalidomide controversy. With these amendments, the FDA began to mandate stricter standards for drug trials. Later regulations would require psychotropic drugs to be marketed towards specific illnesses rather than the everyday ills pharmaceutical ads were targeting at the time.

And then, just as psychiatry’s existential crisis was coming to a head, the APA handed Robert L. Spitzer the keys to the kingdom. Spitzer, a leading psychiatrist at Columbia University, was appointed in 1974 to head the production of the DSM-III, where he would cement his place as perhaps the most influential figure in modern psychiatry. Prior to this third iteration, the DSM was a small, etiologically based manual with comparatively little clout in the psychiatric field — and virtually no visibility in the broader culture — but Spitzer and his team were acutely aware of psychiatry’s crumbling raison d’être.

Some of this rethinking was kicked off by a controversy surrounding the DSM-II, which had classified homosexuality as a mental illness, calling into question the criteria for defining DSM disorders. After some debate, the classification was removed in a single, quick vote, publicly illustrating that these decisions were made not on the basis of science, but under social and political pressure.

Spitzer saw this as proof of the need to move towards a symptom-based model in order to be perceived as objective. To this end, he selected task force members focused on psychiatric research rather than clinical practice and abandoned the Freudian perspective in favor of the theories of the late-19th-century German psychiatrist Emil Kraepelin, whose approach can be summarized by three ideas: that mental disorders should be seen as analogous to physical diseases, that classification should be based on symptoms rather than unproven causal theories, and that research would eventually reveal organic etiologies for these disorders.

Official APA statements would maintain that mental disorders were viewed as behavioral or psychological in nature, but the view of mental disorders as medical diseases characterized Spitzer’s entire approach. Through this biological lens, the DSM-III represented a landmark shift in the view and treatment of mental illness, focusing on brain chemistry and medication over the predominant psychosocial vision, symptoms over causes, medication over therapy, categories over dimensions.

Field trials of the draft DSM-III were financed and legitimized by the National Institute of Mental Health, with mildly encouraging results on consistency between diagnoses, a rallying cry among supporters of the new system in comparison to the abysmal reliability of the DSM-II. But Spitzer and his team placed nearly all of the weight of the new model’s success on this idea of reliability, downplaying the fact that a system that reproduces consistent results is not necessarily valid. A scale that is off by a few pounds, for example, will consistently give you the same invalid results. The body of data they were working with to produce this supposedly data-driven model was also relatively small and inconsistent, with widely divergent results that hardly constituted consensus on nearly any of the diagnoses that made their way into the final manual. Gerald Klerman, the leading psychiatrist in the federal government at the time, would later acknowledge that the manual arose from the deep political pressures facing the discipline rather than the rigors of academic and clinical debate.

After coming to this point, I began to feel disillusioned with the whole mental health world as I knew it. I had set out in hopes of understanding my experience, devouring research papers, seeking professional help, even discussing it with close friends, and yet every one of these interactions now seemed tainted down to the very source. Even in the OCD forums where I’d found some small element of community support in the midst of it all, I would see users angrily post about the flippant misuse of “OCD” as a term to describe mundane things like stacking newspapers on a coffee table and think to myself, “but don’t they know?” After all, what if they’d gotten it all wrong in those backroom negotiations? Certainly, this was no blatant fabrication, but where were the blind spots? What fell through the cracks?

Despite these concerns, the release of the DSM-III on January 1st, 1980 was the watershed moment for the modern view of psychiatry and mental health, ringing in the new decade with far-reaching effects that, within months of its release, would ingrain the model in the psychiatric field with a permanence still being reckoned with today. It quickly came to be viewed as the authoritative text in the field, selling more copies in its first six months than all of the previous editions and reprints combined.

The DSM-III only modestly increased the reliability of diagnosis, but it provided the first common, standardized language for mental health clinicians and researchers to define mental illness when dealing with patients, colleagues, insurance providers, and funders of research. Talk therapy became the territory of other mental health clinicians while psychiatrists retained complete control over the growing field of pharmacological therapy. Where large-scale clinical research was impossible under the previous two iterations, researchers and pharmaceutical companies could now use DSM-III disorders to target research and develop specific treatments and drugs (there were major financial incentives for pharmaceutical companies ingrained in the DSM-III model). The manual was sanctioned by key psychiatric institutions and became the basis of medical school curricula and residency programs by the early 1980s. Journal manuscripts and research proposals needed to be based on its language in order to be seriously considered. Government spending on psychiatric research exploded. It was clear that the DSM-III had made groundbreaking changes in the field, but it had its critics even then. They claimed it was reductionist, adynamic, and not representative of the totality of psychiatric thinking, but these critiques were largely inconsequential. The DSM-III-R and DSM-IV would only reaffirm the manual’s commitment to the new direction.

Untouchable as it may seem, the winds of change brought on by the DSM-III just might come back around to topple it. By the time word began to spread about the creation of the DSM-5, the manual once again found itself on unstable ground. The growing body of research brought on by the DSM-III’s revolution — which had become largely focused on neuroscience — revealed major discrepancies between the DSM model and its findings. The data showed major overlap between the DSM’s supposedly discrete disorders, and the illnesses were linked not to specific biological markers, as Kraepelin had theorized, but to broad genetic vulnerabilities. Neither of these findings could be accounted for by a categorical model. As a result, the DSM-5 task force set out with the seemingly impossible goal of recreating the revolution in psychiatric thinking of its predecessor by developing a new, dimensional model, one actually based on the science this time.

Nearly all of the major task force members from previous iterations were excluded from the creation of the DSM-5, and where previous revisions were handled within a fairly rigid hierarchical structure, the new work groups were granted nearly limitless freedom in developing their proposed models. While this approach was taken in the hopes of fostering revolutionary changes, the decentralization created an atmosphere of chaos and internal division in the task force.

In addition, the dimensional models being developed by the research-oriented work group members drew fierce condemnation from clinicians, who objected to their complexity and worried that they would complicate reimbursement. In response to the growing discontent with the DSM-5 revision, the APA appointed strict oversight committees that all but ensured the demise of the manual’s lofty goals. Nearly all of the DSM-IV’s existing model remained in the final revision, with some sections copied word for word, leaving mainstream psychiatric thought swimming in the flaws of the status quo at the behest of financial and practical influences.

As exciting as another overnight revolution in psychiatric thought might be in theory, barring some earth-shattering breakthrough, it’s unlikely we’ll see anything quite as transformative as what went down in 1980 any time soon. The conditions just aren’t right. The disarray of the pre-DSM field set the stage for a unifying theory to swoop in and take the reins, but standardization — despite its benefits — has come with a price. The DSM and the systems that have sprung forth from its wells have become — in every sense of the word — an institution, and any attempt to dismantle it would require no less resolve than the storming of the Bastille. Still, it seems as though word is beginning to spread of chinks in the manual’s proverbial armor, and some have taken up arms.

A week before the initial publication of the DSM-5 in 2013, the National Institute of Mental Health, the APA’s longtime partner, stated that it would no longer fund research utilizing the DSM categories, deriding the model for its refusal to evolve. In its stead, the institute announced a new diagnostic research system — the Research Domain Criteria, or RDoC — based on biomarkers and neurobiology, ushering in a new era of psychiatric exploration that threatens to upend the DSM’s place on the pedestal. The demand for a new, dimensional model of mental health has opened the field of psychiatry once again to a regime change, though any challenger that comes to bat will need to somehow address the daunting web of influences securing the DSM in place, at least for now.

One such newcomer, a grassroots consortium formed in 2015, has developed a new diagnostic system entitled the Hierarchical Taxonomy of Psychopathology, or HiTOP, which seems to fulfill nearly all of the characteristics of a fully realized continuum based on raw, empirical data. Organized from the bottom up, the model operates on a series of hierarchical levels of increasing generality, with the lowest level populated by a comprehensive list of symptoms and traits. Closely related symptoms are filtered into increasingly general groups that capture the dimensional nature of how mental illnesses manifest. At the top of the hierarchy sits the p-factor, a cutting-edge theoretical concept representing an individual’s overall likelihood of experiencing some form of mental illness during their lifetime. As a result, this model not only offers a more holistic view of individual patients for psychiatric research, it also opens up the possibility of revolutionary advances in how we understand mental health as a whole.

In clinical practice, rather than receiving a distinct diagnosis after a brief psychological examination, a patient would be given a profile of dimensions — a report card of sorts — providing ranges of severity across the entire hierarchical continuum. This profile would be derived from an all-inclusive clinical and psychological examination administered by the treating physician. While the consortium is currently developing a fully realized diagnostic test to accompany the model in clinical settings, it has assembled a battery of existing tests to fill in the gaps in the meantime, opening up the model for current researchers and clinicians to explore. In an attempt to assuage practical clinical concerns, such as how to interpret the results of these tests and how to bill insurance when using the model, it has also created a series of practical training manuals and resources for clinicians. It seems the HiTOP consortium is covering its bases, but the model does not come without its own concerns.

The model is, like the DSM, still a descriptive model focused on symptoms rather than causes, after all. The consortium acknowledges this shortcoming but maintains that, while an etiological model might ultimately be more useful, valid descriptive models hold significant value for the scientific community in the meantime. As examples, it cites the Linnaean system of classifying organisms before the theory of evolution, the Copernican model before Newton’s theory of gravity, and the periodic table before the Bohr model. As a result, however, the model cannot currently distinguish between two similarly presenting cases with hypothetically different causes. Still, the consortium claims the model can evolve to account for such cases should the evidence arise.

Even so, the model is still in its relative infancy with ongoing validity and field trials, so it’s too early to crown HiTOP as the new order even though the initial results look promising on a number of fronts. But in the meantime, what is the average person looking to better understand their own mental health to do? After all, every question seems to lead to another — or the revelation that I was simply asking the wrong question in the first place. What are we to make of this disconcerting reality in which we simply do not fully understand the mechanisms underlying mental illness and health? It is a daunting barrier in human understanding of the mind and our lived experiences that we have yet to break through. And consider the framework we’re working with now, the one that has ingrained itself so deeply in modern Western culture. With its flaws and controversies, should we view our diagnoses as meaningless?

Imperfect as it may seem, we can’t simply condemn the DSM model. After all, we owe much of the groundbreaking psychiatric research of the last 40 years to its standardized language, and many amazing treatment breakthroughs have occurred under its watch, from the rise in effective cognitive-behavioral and modern exposure therapies to the discovery of the gut-brain connection that is reshaping our understanding of the role gut microbiomes play in our mental health. The ubiquity of DSM diagnoses in Western culture continues to reduce the stigma around mental illness, and an eagerness to discuss these topics will only lead to greater investment of our time, our interest, and our money.

And considering some of the most recent developments in the field, it’s hard not to be optimistic about the future of mental healthcare. The rise of telehealth during the COVID-19 pandemic has drastically increased access to care, machine learning and wearable health devices are revealing a valuable new source of data, and the recent reappraisal of psychedelic and dissociative drugs like psilocybin and ketamine has opened the door to a host of new and promising treatments for a variety of illnesses. In 2022, the United States even launched the first federally designated phone number for suicide prevention and mental health crises: 988.

Even so, my mind wanders back to the hospital, to the scene through the wired glass, and to the frail woman shut away on the other side. At the time, I’d assumed I might never see her again, left to wonder what would come of her, but I’ve come to realize that’s not the case. In fact, I see her nearly every day: begging on the exhaust-riddled median of a busy intersection, swaddled in tattered robes on a park bench, simultaneously omnipresent and absent from the public eye. I told myself I saw her plight, that I wished I could help, and yet, in the absence of sealed metal doors, I find myself averting my eyes. And what a shame that is.

In the darkest days of my illness, I felt alone, like nobody was taking my experience seriously, and yet I had food, shelter, and a solid support system that stood by me even if they didn’t quite understand. It was in this environment that I was able to regain my footing, to chart a course — however flawed — toward stability. Without it, I don’t know if I’d be here today. Can we even begin to tackle these issues while so many fester in isolation without the resources to simply stay afloat long enough to find their way? It’s easy to tout the marvels of a good tug on the bootstraps with clean socks and a shower down the hall. Perhaps then I’m asking the wrong question once again. Like physicists looking to the stars, hoping to uncover the secrets of the universe, it’s a noble task to seek a better understanding of the human mind, and perhaps someday we will beach on those far-off shores. But is it not nobler still to seal the cracks along the way?
