Blind people face difficulty going about their daily lives safely because they cannot perceive the objects around them, which exposes them to many dangers. Given the significant number of blind people worldwide, this research builds an intelligent system that helps blind people recognize the objects surrounding them in an indoor environment. The system consists of two parts. The first part is the software, which relies on a deep learning algorithm named YOLO (You Only Look Once). This algorithm was chosen for its high speed and accuracy, which is exactly what this group of people, whom the research aims to help as much as possible, needs. The algorithm was trained on a ready-made dataset, the COCO (Common Objects in Context) dataset, which is used for tasks such as object detection and face detection. The other part of the system is the hardware, which mainly consists of a fast and lightweight single-board computer, the Raspberry Pi 3 Model B, together with headphones and a Raspberry Pi camera. After images are taken by the Raspberry Pi camera, they are sent to the Raspberry Pi, where the YOLO algorithm detects the objects in each image. Once objects are detected, the output is converted into sound and delivered to the person through the headphones, so that they can avoid the things surrounding them.
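The final step of the pipeline described above, turning a list of YOLO detections into an announcement to be played over the headphones, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the (label, confidence) input format, and the 0.5 confidence threshold are all assumptions.

```python
# Sketch of the detection-to-speech step: convert YOLO-style
# detections into a sentence to be spoken over the headphones.
# The threshold and output wording are illustrative assumptions.

def announce(detections, threshold=0.5):
    """Build the announcement sentence from detector output.

    detections: iterable of (label, confidence) pairs, as a
    YOLO-style detector would produce for one camera frame.
    """
    # Keep each distinct label whose confidence clears the threshold.
    kept = sorted({label for label, conf in detections if conf >= threshold})
    if not kept:
        return "No objects detected."
    return "Detected: " + ", ".join(kept) + "."
```

On the Raspberry Pi, the returned string would then be passed to a text-to-speech engine (for example, espeak) and played through the headphones.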
