Smartphones have become everyday companions: almost everyone carries one in a pocket or bag and uses it daily. Modern smartphones are loaded with sensors that provide streams of potentially useful data. At the same time, staying fit (exercising, running, swimming, etc.) has become fashionable. In this climate, employers may try to incentivise their workers to, for instance, commute to work by bicycle. An interesting question then arises: are workers actually using bicycles, as declared, or are they trying to subvert the system and win prizes while, for instance, riding public transport? One way to check this is to use data from smartphone sensors to determine the mode of transportation that was actually used.

This paper presents preliminary results of applying raw sensor data and deep learning techniques to transportation mode detection, in real time, directly on a smartphone. The work seeks to balance sensor power consumption and computational requirements against prediction accuracy and response time. In this context, results of applying recurrent neural networks, as well as more traditional approaches, to a set of actual mobility data are presented. Furthermore, approaches that leverage domain knowledge to make classifiers more reliable while requiring less processing power (and hence less energy) are considered.
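To illustrate the kind of input a recurrent classifier of this sort consumes, the sketch below segments a raw tri-axial accelerometer stream into fixed-length overlapping windows, the standard sequence format fed to GRU/LSTM networks. The window length, step size, and sampling rate here are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def make_windows(signal, window_len=128, step=64):
    """Segment a (T, 3) accelerometer stream into overlapping
    (window_len, 3) windows -- the input shape expected by a
    sequence classifier such as a GRU or LSTM network."""
    windows = [signal[i:i + window_len]
               for i in range(0, len(signal) - window_len + 1, step)]
    return np.stack(windows)  # shape: (n_windows, window_len, 3)

# 10 seconds of synthetic 50 Hz tri-axial accelerometer data
stream = np.random.randn(500, 3)
X = make_windows(stream)
print(X.shape)  # (6, 128, 3) with the defaults above
```

The 50% overlap between consecutive windows is a common compromise: it keeps response time short (a fresh prediction every half window) without re-processing the whole stream for each inference.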
