Mental health disorders affect one in four adult Americans and have a staggering impact on the economy. Improved assessment and early detection of mental health conditions can reduce this impact. Characteristics of the voice have been linked to depression and other mental health disorders. However, difficulties in data collection, variation in collection methods, and the computational demands of analysis methods have limited the use of voice in mental health assessment. The pervasive use of smartphones offers a unique opportunity to study mental health symptoms and speech variation in large populations. We have developed open-source mobile applications for data collection and voice-based algorithms for predicting mental health state in major depressive disorder and Parkinson disease. We demonstrate the use of biophysical speech production models to create and improve features for machine learning, in contrast to traditional approaches to feature extraction. A model-based approach fuses prior knowledge of the system with the input data, constrains the space of parameters to biophysically realistic values, and reduces overall prediction error. This joint work with MIT Lincoln Laboratory and Sage Bionetworks couples mobile sensors to effective feature extraction and prediction models, enabling a scalable approach for estimating individual variation in mental health disorders.
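The model-based idea can be sketched in miniature. The following is a hypothetical illustration (not the authors' code): a voiced speech frame is fit with a simple harmonic source model, and the fundamental-frequency parameter is searched only over a biophysically plausible pitch range, an assumed 50-400 Hz, rather than over an unconstrained parameter space. All names, the sample rate, and the two-harmonic model are assumptions for the sketch.

```python
import math

FS = 8000        # sample rate in Hz (assumed for this sketch)
TRUE_F0 = 180.0  # simulated speaker pitch in Hz (assumed)

def synth(f0, n=800):
    """Toy harmonic model of a voiced frame: fundamental plus one harmonic."""
    return [math.sin(2 * math.pi * f0 * t / FS)
            + 0.5 * math.sin(2 * math.pi * 2 * f0 * t / FS)
            for t in range(n)]

signal = synth(TRUE_F0)  # stand-in for a recorded voiced frame

def fit_error(f0):
    """Sum of squared residuals between the signal and the model at f0."""
    model = synth(f0)
    return sum((s - m) ** 2 for s, m in zip(signal, model))

def estimate_f0(lo, hi, step=1.0):
    """Grid search for the best-fitting f0, restricted to [lo, hi]."""
    best_f0, best_err = lo, float("inf")
    f0 = lo
    while f0 <= hi:
        err = fit_error(f0)
        if err < best_err:
            best_f0, best_err = f0, err
        f0 += step
    return best_f0

# Constrain the search to a biophysically realistic pitch range (assumed
# 50-400 Hz for adult speech); the prior shrinks the parameter space the
# data must disambiguate.
estimate = estimate_f0(50.0, 400.0)
```

The constraint plays the role the abstract describes: prior knowledge of the speech production system (here, plausible pitch bounds) is fused with the observed data, so the estimator never considers physiologically impossible parameter values.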