The theory of Task Dynamics provides a method of predicting articulatory kinematics from a discrete, phonologically relevant representation (the "gestural score"). However, because implementations of that model (e.g., Nam et al., 2004) have generally used a simplified articulatory geometry (Mermelstein et al., 1981) whose forward model (from articulator coordinates to constriction coordinates) can be derived analytically, quantitative predictions of the model for individual human vocal tracts have not been possible. Recently, methods of deriving individual-speaker forward models from real-time MRI data have been developed (Sorensen et al., 2019). This has in turn enabled the development of task-dynamic models for individual speakers that make quantitative predictions. Thus far, however, these models (Alexander et al., 2019) could synthesize only limited types of utterances because they could not model temporally overlapping gestures. An updated implementation is presented that accommodates overlapping gestures and incorporates an optimization loop to improve the fit of modeled articulatory trajectories to observed ones. Using an analysis-by-synthesis approach, the updated implementation can be used: (1) to refine the hypothesized speaker-general gestural parameters (target, stiffness) for individual speakers; and (2) to test different degrees of temporal overlap among multiple gestures, such as in a CCVC syllable. [Work supported by NSF, Grant 1908865.]
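For readers unfamiliar with the dynamics referenced above: in Task Dynamics, each gesture drives a tract variable toward its target as a critically damped second-order system parameterized by target and stiffness, the two gestural parameters the abstract proposes to refine. The sketch below is not the authors' implementation; it is a generic illustration of that point-attractor equation with illustrative, unfitted parameter values.

```python
import math

def gesture_trajectory(z_init, target, stiffness, dt=0.001, steps=300):
    """Integrate z'' = -k(z - target) - b z' with critical damping b = 2*sqrt(k),
    using forward Euler. Parameter values are hypothetical, not speaker-fitted."""
    damping = 2.0 * math.sqrt(stiffness)  # critical damping: no oscillation
    z, v = z_init, 0.0                    # tract variable and its velocity
    out = []
    for _ in range(steps):
        a = -stiffness * (z - target) - damping * v
        v += a * dt
        z += v * dt
        out.append(z)
    return out

# A single constriction gesture moving a tract variable from 10.0 toward 2.0;
# with critical damping the trajectory approaches the target without overshoot.
traj = gesture_trajectory(z_init=10.0, target=2.0, stiffness=500.0)
```

Overlapping gestures, the focus of the updated implementation, would correspond to multiple such equations simultaneously exerting influence on shared articulators via the speaker-specific forward model, which this single-gesture sketch does not attempt to capture.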