In mobile environments, automatic speech recognition (ASR) systems are being used for information retrieval, but their performance degrades in the presence of multiple speakers, noise sources, and reverberation. This work addresses the improvement of ASR performance by separating the convolutively mixed speech signals that predominantly occur in such environments. For the separation, an extension of the algorithm published by A. Ossadtchi and S. Kadambe ["Over-complete blind source separation by applying sparse decomposition and information theoretic based probabilistic approach," ICASSP, 2001] is applied in the time-frequency domain. In the extended algorithm, the dual update algorithm, which minimizes the L1 and L2 norms simultaneously, is applied in every frequency band. The channel swapping (permutation) problem that arises with per-band separation is also addressed. Experimental results on the separation of convolutively mixed signals show an SNR improvement of about 6 dB. The enhanced speech signals are then fed to a GMM-based continuous speech recognizer, and the recognition experiments are performed on real speech data collected inside a vehicle. Complete ASR performance improvement results will be provided during the presentation.
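
To make the overall processing structure concrete, the sketch below shows a minimal frequency-domain convolutive separation loop with a naive permutation-alignment step, assuming a two-channel mixture. It is not the authors' implementation: the per-band unmixing shown here is only a whitening placeholder standing in for the sparse-decomposition dual update step (simultaneous L1/L2 minimization) described in the abstract, and the function name and parameters are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft


def separate_convolutive(mixtures, fs=16000, n_fft=512, hop=128):
    """Structural sketch of frequency-domain convolutive BSS (2 channels).

    mixtures: array of shape (2, n_samples) holding the time-domain mixtures.
    Returns an array of the same shape with per-band-processed outputs.
    NOTE: the per-band unmixing below is a whitening placeholder, not the
    abstract's dual-update (L1 + L2) sparse-decomposition step.
    """
    noverlap = n_fft - hop
    # Time-frequency analysis of both mixture channels.
    _, _, X = stft(mixtures, fs=fs, nperseg=n_fft, noverlap=noverlap)
    n_ch, n_freq, n_frames = X.shape

    Y = np.empty_like(X)
    for f in range(n_freq):
        Xf = X[:, f, :]                               # (n_ch, n_frames)
        R = Xf @ Xf.conj().T / n_frames               # spatial covariance
        R += 1e-9 * np.trace(R).real * np.eye(n_ch)   # regularization
        W = np.linalg.inv(np.linalg.cholesky(R))      # placeholder unmixer
        Y[:, f, :] = W @ Xf

        # Resolve channel swapping against the previous band by comparing
        # amplitude-envelope correlations of the two candidate orderings.
        if f > 0:
            prev = np.abs(Y[:, f - 1, :])
            cur = np.abs(Y[:, f, :])
            keep = cur[0] @ prev[0] + cur[1] @ prev[1]
            swap = cur[0] @ prev[1] + cur[1] @ prev[0]
            if swap > keep:
                Y[:, f, :] = Y[::-1, f, :]

    # Back to the time domain, trimmed to the input length.
    _, sources = istft(Y, fs=fs, nperseg=n_fft, noverlap=noverlap)
    return sources[:, :mixtures.shape[1]]
```

In a full system following the abstract, each band's placeholder unmixer would be replaced by the mixing-matrix estimate obtained from the extended Ossadtchi-Kadambe sparse decomposition, and the separated outputs would then be passed to the GMM-based continuous speech recognizer.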