OUR APP FEATURES

The use of image processing and object detection technologies such as YOLOv5 models (deployed through TensorFlow Lite), together with a TTS engine for voice output, points to a project built around rapid development and frequent testing. This fits Agile techniques, which call for frequent delivery of working software and allow modifications at any stage of the development process.


Flutter Design

Flutter is an open-source framework by Google for building beautiful apps. It uses a single codebase written in Dart, a programming language also created by Google.

Flutter for Voice Assistance

Speech recognition is implemented through Flutter packages backed by a neural network. Flutter makes machine learning approachable by providing the tools to integrate a trained neural network, which the app uses to deliver voice assistance for visually impaired people.

Text-To-Speech

Text-to-speech (TTS) engines convert written text into spoken words, enabling applications in accessibility, automated voice responses, and virtual assistants, among others.

YOLOv5 Models From Kaggle

Kaggle hosts TensorFlow Lite (.tflite) models. TensorFlow Lite lets you run machine learning tasks such as image recognition or speech detection on mobile devices, even though those devices have limited computing power.
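A detector like YOLOv5 returns many overlapping candidate boxes, so the app must filter them before speaking anything to the user. The sketch below shows the standard post-processing steps (confidence thresholding plus greedy non-maximum suppression) in plain Python; the box format, threshold values, and labels are illustrative assumptions, not taken from the project's actual model.

```python
# Minimal sketch of YOLO-style detection post-processing.
# Boxes are (x1, y1, x2, y2) tuples; thresholds are illustrative.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(dets, conf_thresh=0.4, iou_thresh=0.5):
    """Keep confident boxes, then drop overlapping duplicates (greedy NMS)."""
    dets = sorted((d for d in dets if d["conf"] >= conf_thresh),
                  key=lambda d: d["conf"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

For example, two near-identical "chair" boxes at 0.9 and 0.6 confidence collapse to the single 0.9 box, while a 0.2-confidence box is discarded outright.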

Watch Promo Video

APPS SCREENSHOT

Our goal is to enhance the independence and safety of visually impaired individuals by creating a navigation system that recognizes common indoor items like furniture and provides auditory guidance to avoid obstacles.

This isn’t just an app; it’s a companion that speaks in the language of care and accessibility.

[App screenshot images]

OTHER DOCUMENTS

These buttons provide access to key project documents: the Project Proposal, Ethics Form, Project Poster, and Progress Report.

FAQ

Visual impairment presents significant challenges to individuals, affecting their independence and overall quality of life. To address these challenges, a smart mobile accessibility application that incorporates machine learning and image processing techniques has the potential to empower blind individuals. This section gives an overview of why such an application matters for the accessibility and autonomy of visually impaired people.

Creating a smart mobile accessibility application is one promising way to meet these demands. Badave, Jagtap, Kaovasia, et al. have explored an Android-based object detection system designed specifically for visually impaired individuals [4]. By harnessing technology, the application aims to improve the independence and autonomy of blind people: using machine learning algorithms and image processing techniques, it can interpret visual content captured by the smartphone's camera and deliver auditory feedback on it. It also provides auditory guidance for navigation, allowing users to explore unfamiliar places with ease.

The benefits of such an application are numerous. It enables blind people to engage in previously inaccessible activities such as exploring new areas, reading menus, identifying objects, and accessing digital content. By giving real-time audio feedback, it improves overall quality of life and fosters independence, self-reliance, and social inclusion. Test results have demonstrated its effectiveness in real-world scenarios: several studies report high accuracy in object recognition, language interpretation, and navigation assistance [4], and user feedback indicates increased confidence and improved mobility for visually impaired individuals.
Problem

How can a multifaceted accessibility ecosystem be designed and implemented to empower blind individuals by providing comprehensive solutions for autonomous travel and shopping for market items, ensuring that they can overcome these challenges and enhance their autonomy in everyday life?


Blind individuals encounter multiple daily challenges that hinder their independence and accessibility in various aspects of life. They face difficulties in navigating public spaces and identifying market items. Existing assistive technologies and solutions often fall short in providing comprehensive support for these diverse needs. Therefore, there is a critical need to develop an integrated accessibility ecosystem that addresses these challenges collectively, aiming to empower blind individuals to lead more independent and inclusive lives.

System Architecture
The proposed solution involves the development of a versatile mobile application that integrates advanced image processing and voice communication technologies to empower blind individuals in their daily lives. The app leverages image recognition to let users independently detect indoor objects and identify market items by capturing and interpreting images, providing them with real-time audio descriptions and information. It also offers voice communication features, facilitating safe and convenient navigation and enabling users to interact with the app through spoken commands so they can travel autonomously. In this way, the app aims to be an inclusive, comprehensive solution that fosters greater independence and accessibility for blind individuals across diverse aspects of their daily routines.

Project Test Video
Use of Image Processing: Image processing technology is used in all planned and current systems. This critical capability allows systems to understand visual input and extract useful data from pictures, and it serves as the cornerstone for the capabilities of various assistive technologies.

Designed For The Blind: Again, all planned and current solutions are developed with blind people's specific requirements in mind. This emphasis on accessibility guarantees that the technologies are prepared to address the particular challenges that people with visual impairments experience, with capabilities such as voice output and haptic interfaces.

Voice Command Identification: This feature allows users to interact with the system by speaking commands, a major step forward in offering a smooth and intuitive user experience. This capability can considerably improve usability for people who struggle with standard interfaces.
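Once a speech recognizer produces a transcript, the app still has to decide which action the user meant. A minimal way to do this is keyword matching against a small command vocabulary, sketched below in Python as a language-agnostic illustration; the command names and trigger phrases are hypothetical, since the document does not specify the app's actual vocabulary.

```python
# Hedged sketch: mapping a recognized speech transcript to an app action.
# Command names and trigger phrases below are illustrative assumptions.

COMMANDS = {
    "describe": {"describe", "what is in front of me", "what do you see"},
    "navigate": {"navigate", "take me", "guide me"},
    "repeat":   {"repeat", "say again"},
}

def match_command(transcript):
    """Return the first command whose trigger phrase appears in the transcript."""
    text = transcript.lower().strip()
    for action, phrases in COMMANDS.items():
        if any(p in text for p in phrases):
            return action
    return None
```

A production system would likely use the recognizer's intent support or fuzzy matching, but even this simple lookup makes the interaction hands-free.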

Identification Of Total Text Material: Studies 1 and 2 can identify complete text material, meaning they can detect and analyze text within photos thoroughly. This functionality is important for tasks such as reading documents, signage, or other text-based information, as it gives users complete access to textual content in their surroundings.

Output As A Voice Message: This function is extremely useful for blind individuals since it allows them to hear information. This not only makes the material easy to understand, but also encourages independence by removing the need for assistance from others.

Smartphone Application: This platform choice enhances the simplicity of use and portability of the assistive technology, enabling users to carry it with them wherever they go, and it offers a wide range of features geared toward the needs of those with vision impairments.
Environmental Recognition and Navigation for Supermarkets: To develop a robust and user-friendly mobile application that utilizes advanced object recognition techniques, such as YOLOv5, to empower visually impaired individuals to navigate supermarkets and recognize objects in real-time, thereby promoting greater accessibility and autonomy during shopping.

Include Audio Feedback: The app should give the user auditory feedback describing the environment and the items in the image. This can involve employing text-to-speech (TTS) technology to translate text descriptions into audio.
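Before anything reaches the TTS engine, the app needs to turn a list of detections into one clear sentence, ideally announcing the nearest obstacle first. The Python sketch below illustrates this composition step; the `(label, distance_m, side)` record structure is an assumption made for illustration, not the app's actual data model.

```python
# Hedged sketch: composing a spoken description from detections for a TTS engine.
# The detection record fields (label, distance_m, side) are illustrative.

def describe_scene(detections):
    """Compose a short spoken description, nearest obstacles first."""
    if not detections:
        return "No obstacles detected."
    ordered = sorted(detections, key=lambda d: d["distance_m"])
    parts = ["{} about {:.0f} meters ahead, to your {}".format(
        d["label"], d["distance_m"], d["side"]) for d in ordered]
    return "Detected: " + "; ".join(parts) + "."
```

The resulting string would then be handed to the platform's TTS engine (e.g. a Flutter TTS plugin) to be spoken aloud.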

Enhance Depth Perception and Barrier Recognition: The third specific objective focuses on refining the depth perception capabilities of the system and improving the accuracy of barrier recognition. This involves optimizing the image processing algorithms to provide accurate distance measurements to different objects and surfaces. Additionally, the algorithms should be designed to effectively differentiate between various types of barriers and obstacles, aiding the user in identifying potential hazards in their path.
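One common single-camera approach to the distance-measurement goal above is the pinhole model: if an object class has a roughly known real-world height, distance scales inversely with its bounding-box height in pixels. The sketch below assumes an illustrative focal length and made-up per-class height priors; a real system would calibrate the camera and use measured values.

```python
# Hedged sketch: rough monocular distance estimate via the pinhole camera model,
#   distance = real_height * focal_length_px / bbox_height_px
# The focal length and the per-class height priors below are assumptions.

KNOWN_HEIGHTS_M = {"door": 2.0, "chair": 0.9, "person": 1.7}  # assumed priors

def estimate_distance(label, bbox_height_px, focal_length_px=800.0):
    """Approximate distance in meters from a bounding-box height in pixels."""
    real_h = KNOWN_HEIGHTS_M.get(label)
    if real_h is None or bbox_height_px <= 0:
        return None  # unknown class or degenerate box: no estimate
    return real_h * focal_length_px / bbox_height_px
```

This is only a coarse cue; pairing it with stereo depth, LiDAR, or a learned depth model would be needed for reliable barrier recognition.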

GET IN TOUCH

Phone Number

+94 75 927 8559

Email Address

azam.techofficial@gmail.com