According to the World Health Organization, more than 264 million people worldwide suffer from Major Depressive Disorder (MDD) and another 20 million have schizophrenia. MDD and schizophrenia are among the most common precursors to suicide, and, according to a 2018 CDC report, suicide is the second leading cause of death among youth and young adults between 10 and 34 years of age. While suicide rates have historically been low in the black community, suicide has recently become a crisis for black youth: it is the second leading cause of death for black children between 10 and 14 years of age, and the third leading cause of death for black adolescents between 15 and 19 years of age.

Our work focuses on understanding how a person's mental health status is reflected in the coordination of their speech gestures. Speech articulation is a complex activity that requires finely timed coordination across articulators, i.e., the tongue, jaw, lips, velum, and larynx. In a depressed or psychotic state, this coordination changes and, in turn, alters the perceived speech signal.

In this talk, I will discuss a speech inversion system we developed that maps the acoustic signal to vocal tract variables (TVs), whose trajectories capture the timing and spatial movement of speech gestures. Next, I will discuss how we use machine learning techniques to compute articulatory coordination features (ACFs) from the TVs; the ACFs serve as input to a deep learning model for mental health classification. Finally, I will illustrate the key acoustic differences between speech produced by subjects when they are mentally ill and when they are in remission, as well as relative to healthy controls.

The ultimate goal of this research is a technology (perhaps an app) that helps patients, their therapists, and their caregivers monitor mental health status between therapy sessions. A Q&A will follow the presentation.
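For readers curious how coordination features can be derived from TV trajectories, below is a minimal illustrative sketch. It assumes one common formulation from the articulatory-coordination literature (stacking time-delayed copies of each TV channel, forming their correlation matrix, and using its eigenvalue spectrum as the feature vector); the function name, the delay set, and this exact recipe are illustrative assumptions, not necessarily the speaker's implementation.

```python
import numpy as np

def articulatory_coordination_features(tvs, delays=(0, 1, 3, 7)):
    """Sketch of ACF extraction (assumed formulation, not the speaker's exact method).

    tvs:    (n_channels, n_frames) array of TV trajectories.
    delays: frame lags at which each channel is replicated.
    Returns the descending eigenvalue spectrum of the channel-delay
    correlation matrix; tightly coupled articulators concentrate variance
    in a few large eigenvalues.
    """
    n_ch, n_frames = tvs.shape
    max_d = max(delays)
    # Stack a time-shifted copy of every channel at every delay.
    rows = [tvs[ch, d:n_frames - max_d + d]
            for ch in range(n_ch) for d in delays]
    stacked = np.vstack(rows)                  # (n_ch * len(delays), frames)
    corr = np.corrcoef(stacked)                # channel-delay correlation matrix
    return np.linalg.eigvalsh(corr)[::-1]      # descending eigenspectrum

# Example: 6 TV channels, 200 frames of synthetic trajectories.
rng = np.random.default_rng(0)
acf = articulatory_coordination_features(rng.standard_normal((6, 200)))
print(acf.shape)  # (24,): one eigenvalue per channel-delay row
```

A fixed-length vector like this can then feed the downstream deep learning classifier regardless of utterance duration.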