Abstract:
Visually impaired people face many daily challenges and depend heavily on others. They struggle to locate target
objects, to identify them, and even to navigate their indoor surroundings. This paper presents the development of a system
designed to enhance the mobility and independence of blind and visually impaired individuals by recognizing their environment and
navigating safely. The proposed method aims to facilitate everyday tasks by identifying objects and providing audio feedback for
navigation and object detection, through a robotic body that can be worn on the head consisting of a mini-pc, a camera, and a
headset. The system uses a combination of artificial intelligence algorithms, cameras, and audio feedback mechanisms to interpret
visual data. The YOLOv7 algorithm detects objects in indoor environments, triangle similarity is applied for distance estimation
and alerting, and a proposed algorithm provides directions during navigation. The evaluation yields an F1 score of 92.2%,
indicating a good balance between precision and recall and high performance across all classes. The
system operates in two modes: detection and navigation, and provides audio feedback to interact with the user. This innovation seeks
to address the challenges of spatial awareness and navigation for the visually impaired, with the ultimate goal of improving their
quality of life and independence.
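The abstract names triangle similarity as the distance-estimation method but gives no implementation details. A minimal sketch of the standard triangle-similarity approach is shown below; the function names, the calibration step, and the example measurements (a 45 cm reference object photographed at 100 cm) are illustrative assumptions, not values from the paper.

```python
def calibrate_focal_length(known_distance_cm, known_width_cm, perceived_width_px):
    """One-time calibration: derive the camera's focal length (in pixels)
    from a reference image of an object of known width taken at a known distance."""
    return (perceived_width_px * known_distance_cm) / known_width_cm

def estimate_distance(known_width_cm, focal_length_px, perceived_width_px):
    """Triangle-similarity estimate:
    distance = (real object width * focal length) / width in pixels."""
    return (known_width_cm * focal_length_px) / perceived_width_px

# Hypothetical calibration: a 45 cm wide object at 100 cm spans 300 px.
focal = calibrate_focal_length(100, 45, 300)
# If a detected bounding box of the same object class is 150 px wide,
# the object is estimated to be twice as far away: 200 cm.
print(estimate_distance(45, focal, 150))  # → 200.0
```

In a system like the one described, the pixel width would typically come from the YOLOv7 bounding box, and an alert would be raised when the estimated distance falls below a safety threshold.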
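The reported 92.2% is an F1 score, the harmonic mean of precision and recall. The sketch below shows the standard formula; the precision and recall values used in the example are hypothetical, chosen only to illustrate a pair consistent with an F1 of roughly 0.922.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: precision 0.930 and recall 0.914 give F1 ≈ 0.922.
print(round(f1_score(0.930, 0.914), 3))  # → 0.922
```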