Object Detection

Machine Vision & Intelligence

Collaborative Vehicular Vision

Principal Investigator
Sujit Dey, Truong Nguyen
Research Students
Sam Thornton, Ji Dai

This project focuses on improving the accuracy of object recognition and the situational awareness of connected and autonomous vehicles by having multiple vehicles and smart street IoT devices (such as smart lights and smart intersections) combine their sensor data through a mobile edge computing (MEC) node located at a nearby roadside unit (RSU). A vehicle, even one with the most sophisticated array of sensors, can perceive only a limited area around itself; this perception is further degraded by adverse weather and lighting conditions, and by occlusions created by other road users (vehicles and pedestrians) as well as by buildings and road infrastructure.
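As a minimal sketch of what the MEC node might do, the snippet below merges object detections reported by several vehicles into one shared view. The representation (axis-aligned boxes in a common road-plane frame) and the IoU-based deduplication threshold are illustrative assumptions, not the project's actual fusion method.

```python
# Illustrative sketch (assumed representation): an MEC node at an RSU
# merging detections from multiple vehicles. Each detection is an
# axis-aligned box (x1, y1, x2, y2) in a shared road-plane frame;
# duplicates (the same object seen by two vehicles) are collapsed
# when their intersection-over-union (IoU) exceeds a threshold.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_detections(per_vehicle_boxes, iou_thresh=0.5):
    """Union the box lists from all vehicles, dropping near-duplicates."""
    merged = []
    for boxes in per_vehicle_boxes:
        for box in boxes:
            if all(iou(box, kept) < iou_thresh for kept in merged):
                merged.append(box)
    return merged
```

The point of the sketch is the gain from sharing: an object occluded from one vehicle's viewpoint still appears in the merged view as long as any participant (vehicle or roadside sensor) detected it.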

Autonomous Driving Sensors

Machine Vision & Intelligence

Cooperative Driving with Highly Improved Mapping & Localization, Perception, and Path Planning

Principal Investigator
Dinesh Bharadia, Tara Javidi
Research Students
Yongxi Lu, Aman Raj, Samuel Sunarjo, Rui Guo, Ish Jain, Yeswanth Reddy

Autonomous and automated driving requires sensing the environment in order to decide which driving actions to perform. On-board sensing alone is limited by non-line-of-sight objects, long ranges, and bad weather; vehicle-to-vehicle (V2V) wireless communication can significantly improve a vehicle's sensing ability and address these challenges by combining sensing and driving information collected from multiple cars (cooperative data). In summary, this project aims to characterize the cooperative sensing gain achieved via optimized information acquisition strategies enabled by wireless links between vehicles.
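One simple way to see a cooperative sensing gain is variance reduction: if several vehicles independently measure the same quantity (say, the range to an obstacle) with known noise levels, inverse-variance weighting yields a combined estimate whose variance is lower than any single vehicle's. This is a textbook illustration under assumed independent Gaussian noise, not the project's specific acquisition strategy.

```python
# Illustrative sketch: fusing independent noisy estimates of one quantity
# shared over V2V links. Assumes independent, unbiased measurements with
# known variances; inverse-variance weighting is then the minimum-variance
# linear combination.

def fuse_estimates(readings):
    """readings: list of (estimate, variance) pairs from different vehicles.
    Returns (fused_estimate, fused_variance)."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused = sum(w * est for (est, _), w in zip(readings, weights)) / total
    # Fused variance 1/sum(1/var_i) is <= the smallest input variance:
    # this shrinkage is the cooperative sensing gain.
    return fused, 1.0 / total
```

For two equally reliable vehicles the fused variance is half of each individual variance, and it keeps shrinking as more vehicles contribute.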

Autonomous Driving Sensors

We would like to use multi-modal data from in-vehicle sensors, such as 3D and infrared sensors from Qualcomm, together with contextual information and new data fusion, machine vision, and artificial intelligence techniques, to accurately determine a driver's identity as well as his/her intent. While face recognition techniques have been developed using 3D and infrared sensors, accurately identifying a driver may still be challenging in various circumstances. Moreover, while techniques exist for pose and gaze estimation as well as gesture recognition, accurately understanding driver intent is still far from reality. Hence, using the accuracy and capabilities of newly available sensor technologies, along with the ability to capture contextual information about both the driver inside the vehicle and the road conditions outside it, we propose to develop a robust system to analyze and estimate a driver's intent. Such a system could significantly improve intelligent assistance and guidance for the driver's own vehicle as well as for neighboring vehicles.
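To make the fusion idea concrete, here is a toy rule-based sketch that combines a few in-cabin cues (gaze direction, turn-signal state, steering angle) into a coarse intent label. All cue names, thresholds, and rules are hypothetical and purely illustrative; the project's actual system would learn such a fusion from multi-modal sensor data rather than hand-code it.

```python
# Hypothetical sketch: rule-based fusion of multi-modal driver cues into a
# coarse intent label. Cue names and the 15-degree steering threshold are
# invented for illustration only.

def estimate_intent(gaze, turn_signal, steering_angle):
    """gaze: 'left' / 'right' / 'forward'; turn_signal: 'left' / 'right' /
    None; steering_angle in degrees (positive = right).
    Returns a coarse intent label."""
    if turn_signal and gaze == turn_signal:
        # Explicit signal confirmed by where the driver is looking.
        return f"turn_{turn_signal}"
    if abs(steering_angle) > 15:
        # Maneuver already underway even without a signal.
        return "turn_right" if steering_angle > 0 else "turn_left"
    if gaze != "forward":
        # A glance with no other maneuver cue, e.g. a mirror check.
        return f"checking_{gaze}"
    return "lane_keep"
```

Even this toy version shows why multiple modalities matter: a glance alone means something different when it agrees with the turn signal than when it occurs in isolation.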