In this paper, we modify the Gabor feature extraction process, applying the Gabor filters to the power-normalized spectrum and concatenating the result with power-normalized cepstral coefficients (PNCC), for noise-robust large vocabulary continuous speech recognition. In Chang et al., ICASSP (2013), a similar Gabor filter bank (GBFB) feature set with multi-layer perceptron (MLP) processing (to reduce the feature dimension) was used with mel frequency cepstral coefficients, showing improvements on the Aurora-2 and renoised Wall Street Journal corpora. On a subset of the Aurora-4 database (male speakers only), our method has shown promising results, performing 7.9% better than 39-dimensional PNCC features when PCA is used for dimension reduction. However, the GBFB features are a rich representation of the speech spectrogram (an overcomplete basis), and an appropriate dimension-reduction/manifold-learning technique is the key to generalizing these features to the large vocabulary task. Hence, we propose the use of Laplacian Eigenmaps to obtain a reduced 13-dimensional manifold representation (from the 564-dimensional GBFB feature set) for the training dataset, with an MLP used to learn the mapping so that it can be applied to out-of-sample points, i.e., the test dataset. The reduced GBFB features are then concatenated with the 26-dimensional PNCC plus acceleration coefficients. This technique should lead to better accuracy, since speech lies on a nonlinear manifold rather than in a linear feature space. [This project was supported in part by DARPA.]
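For concreteness, the reduction-and-mapping pipeline described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn, hypothetical file names for precomputed GBFB and PNCC feature matrices, and only the dimensions quoted in the abstract (564-dimensional GBFB reduced to 13, concatenated with 26-dimensional PNCC plus acceleration); the GBFB and PNCC extraction steps themselves are not shown.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neural_network import MLPRegressor

# Hypothetical precomputed GBFB feature matrices (frames x 564).
X_train = np.load("gbfb_train.npy")   # shape: (n_train_frames, 564)
X_test = np.load("gbfb_test.npy")     # shape: (n_test_frames, 564)

# Laplacian Eigenmaps (spectral embedding) on the training frames:
# reduce the 564-dim GBFB features to a 13-dim nonlinear manifold.
# n_neighbors is an assumed hyperparameter, not from the abstract.
embedder = SpectralEmbedding(n_components=13, n_neighbors=10)
Y_train = embedder.fit_transform(X_train)  # (n_train_frames, 13)

# Laplacian Eigenmaps has no native out-of-sample extension, so an MLP
# is trained to regress the embedding from the raw GBFB features; the
# learned mapping can then be applied to unseen test frames.
mlp = MLPRegressor(hidden_layer_sizes=(256,), max_iter=200)
mlp.fit(X_train, Y_train)
Y_test = mlp.predict(X_test)  # 13-dim reduced GBFB features for test data

# Concatenate with the 26-dim PNCC-plus-acceleration features (also
# hypothetical here) to form the final 39-dim feature vector per frame.
pncc_test = np.load("pncc_test.npy")        # (n_test_frames, 26)
features_test = np.hstack([Y_test, pncc_test])  # (n_test_frames, 39)
```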