As the volume of acoustic data collected to monitor marine mammals grows, so too does the need for rapid, reliable detection and accurate classification of bioacoustic signals. Deep learning methods of detection and classification are increasingly proposed as a means of addressing this processing need. These image recognition and classification methods use neural networks that independently learn important features of bioacoustic signals from spectrograms. Recent marine mammal call detection studies report consistent performance even when networks are applied to datasets not included in their training. We present here the use of DeepSqueak, a novel open-source tool originally developed to detect and classify ultrasonic vocalizations from rodents in a low-noise laboratory setting. We trained networks in DeepSqueak to detect marine mammal vocalizations in comparatively noisy, natural acoustic environments. DeepSqueak employs a region-based convolutional neural network architecture within an intuitive graphical user interface that provides automated detection results independent of acoustician expertise. Using passive acoustic data from two hydrophones on the Ocean Observatories Initiative’s Coastal Endurance Array, we developed networks for humpback whales, delphinids, and fin whales. We report the performance and limitations of this detection method for each species.