Real-Time Driver Behavior Detection Using AI and Machine Vision
Date
JAN 2025 - PRESENT
Location
Human Systems Lab, University of Windsor
Project type
SELF-DRIVING VEHICLES
Assessment
Glance allocation, Driver activity, Awareness
At the Human Systems Lab, I lead a project to enhance road safety by developing a real-time driver behavior detection system. Leveraging AI and machine vision, the project analyzes driver attention and detects distractions in dynamic driving environments. Using YOLO and MediaPipe, I built software to monitor glance allocation and detect distractions, integrating generative AI APIs to provide real-time insights into driver behavior patterns across various driving modes. This work advances human-machine interaction by supporting safer, more attentive driving.
YOLO was used to identify objects such as mobile phones and food items, while MediaPipe identified posture and body parts such as the hands, face, and arms. Combining the two, the system determines which activity the driver is engaged in. MediaPipe Face Mesh was then used to estimate glance allocation from head movement and iris tracking.
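A minimal sketch of how this combination might look, assuming Ultralytics YOLOv8 with COCO-pretrained weights and MediaPipe Hands; the distractor label set and the hand-inside-box heuristic are illustrative stand-ins, not the project's actual models or rules:

```python
# Sketch: flag a frame as "distracted" when a detected hand landmark falls
# inside the bounding box of a distractor object (phone, food, drink).
# Assumed: Ultralytics YOLOv8 COCO weights and this illustrative label set.
import cv2
import mediapipe as mp
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")                      # assumed pretrained weights
hands = mp.solutions.hands.Hands(max_num_hands=2)
DISTRACTORS = {"cell phone", "cup", "sandwich", "banana"}  # illustrative classes

def is_distracted(frame):
    """True if any hand landmark lies inside a distractor's bounding box."""
    h, w = frame.shape[:2]
    boxes = [b.xyxy[0].tolist()                    # [x1, y1, x2, y2] in pixels
             for b in detector(frame, verbose=False)[0].boxes
             if detector.names[int(b.cls)] in DISTRACTORS]
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not boxes or not result.multi_hand_landmarks:
        return False
    for hand in result.multi_hand_landmarks:
        for lm in hand.landmark:                   # landmarks are normalized
            x, y = lm.x * w, lm.y * h
            if any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in boxes):
                return True
    return False
```

The glance-allocation side can be sketched similarly. MediaPipe Face Mesh with `refine_landmarks=True` adds iris landmarks (indices 468-477); the eye-corner indices below are the standard Face Mesh ones, while the on-road band is an assumed, uncalibrated placeholder:

```python
# Sketch: estimate horizontal gaze from the right iris's position between
# the eye corners, then bucket each frame as on-road / off-road.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
R_IRIS, R_OUTER, R_INNER = 468, 33, 133    # right-eye Face Mesh indices

def gaze_ratio(frame):
    """Iris position across the right eye, 0 (outer) .. 1 (inner), or None."""
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    lm = result.multi_face_landmarks[0].landmark
    return (lm[R_IRIS].x - lm[R_OUTER].x) / (lm[R_INNER].x - lm[R_OUTER].x)

def glance_label(ratio, on_road=(0.35, 0.65)):     # assumed, uncalibrated band
    if ratio is None:
        return "no-face"
    return "on-road" if on_road[0] <= ratio <= on_road[1] else "off-road"
```

Aggregating these per-frame labels over a session yields glance-allocation statistics such as the share of time with eyes on the road.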
Key Contributions:
- Designed and implemented a machine vision system using YOLO and MediaPipe for real-time driver monitoring.
- Integrated generative AI APIs to analyze glance allocation and distraction levels, enhancing road safety analytics (see the sketch after this list).
- Conducted in-depth studies of driver behavior patterns across manual, semi-automated, and automated driving modes.
- Developed methodologies to ensure robust data collection and analysis, contributing to actionable safety recommendations.
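The portfolio does not name the generative AI API used, so the sketch below uses OpenAI's chat completions purely as a stand-in: aggregated per-drive statistics are turned into a short prompt, and the model returns a plain-English summary of the driver's behavior pattern.

```python
# Sketch: summarize aggregated monitoring statistics with a generative AI API.
# OpenAI's client is an assumed stand-in; any chat-style API would work.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def summarize_drive(stats: dict) -> str:
    prompt = (
        "Summarize this driver-monitoring session in two sentences and flag "
        f"any safety concerns: {stats}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                       # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example (hypothetical statistics):
# summarize_drive({"eyes_on_road_pct": 87, "phone_use_events": 3,
#                  "mode": "semi-automated"})
```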