Deep learning, computer vision and self-driving cars

I develop GPU solutions for self-driving cars at NVIDIA. In my lab at UIUC, we research intelligent and dependable autopilot architectures for modern autonomy. Hobby-grade drones and R/C cars have been augmented with embedded Linux computers, networking, and sensors. We use motion capture technologies for indoor positioning, and deep learning with GPU acceleration to train deep neural network models.

pnTurns.gif

Training robot cars to follow navigation commands and perform maneuvers.

Robotics and machine learning technologies are combined to develop self-driving cars towards L4 capabilities.

lidar.jpg

LiDAR and machine learning-based object avoidance

Here, a TensorFlow model is deployed on a rover: it reads 2D LiDAR measurements and avoids dynamic obstacles.

LiDAR-object-avoidance.gif

Object avoidance using TensorFlow with MAVkit (see below)

Similar functionality can be achieved with expert-engineered algorithms, but the neural-network approach is distinctive: it mimics human driver behavior.
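
The exact network deployed on the rover is not detailed here, but a minimal Keras sketch of the idea, assuming a 360-beam scan normalized by a 10 m maximum range and a single steering output in [-1, 1] (layer sizes are illustrative), could look like this:

    # Minimal sketch: a small network mapping a 2D LiDAR scan to a steering
    # command. The input size (360 beams), layer sizes, and 10 m maximum
    # range are illustrative assumptions, not the deployed model.
    import numpy as np
    import tensorflow as tf

    NUM_BEAMS = 360   # assumed: one range reading per degree
    MAX_RANGE = 10.0  # assumed sensor range in meters, used for normalization

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(NUM_BEAMS,)),
        tf.keras.layers.Dense(64, activation="relu"),
        # tanh keeps the steering command in [-1, 1] (full left .. full right)
        tf.keras.layers.Dense(1, activation="tanh"),
    ])
    model.compile(optimizer="adam", loss="mse")

    # At run time each incoming scan is normalized and fed to the model.
    scan = np.random.uniform(0.0, MAX_RANGE, size=(1, NUM_BEAMS)).astype("float32")
    steering = model.predict(scan / MAX_RANGE)
    print("steering command:", float(steering[0, 0]))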

IMG_0081.JPG

End-to-end learning for a self-driving car, implemented on a model R/C vehicle.

A simplified CNN architecture based on NVIDIA's model has been trained to drive a self-driving model vehicle running MAVkit (see below). The TensorFlow model runs on an off-board GPU, and steering commands are streamed to the MAVkit vehicle over the network.
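
For reference, NVIDIA's published end-to-end network takes a 66x200 cropped camera frame through a stack of strided convolutions into a small dense head. Here is a hedged Keras sketch of a simplified variant; the model actually trained for this vehicle may differ:

    # Sketch of a simplified end-to-end steering CNN in the spirit of
    # NVIDIA's published architecture (66x200 input, strided convolutions,
    # small dense head). Not the exact model used on this vehicle.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu",
                               input_shape=(66, 200, 3)),  # cropped camera frame
        tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(50, activation="relu"),
        tf.keras.layers.Dense(1),  # steering angle
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()

Each predicted angle can then be packed into a small network message to the vehicle's control loop, matching the off-board GPU setup described above.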

otg-mavkit.gif

Accessible Self-driving Car Testbed

Self-driving technologies will revolutionize human mobility. To support the researchers and students who are the driving force of this revolution, low-cost testbed platforms have been developed. Thanks to the efforts of the open-source community and FPV hobbyists, a complete deep-learning self-driving platform can be built on a low budget.

YOLO

YOLO with OTG real-time computer vision

Inspired by the FPV hobby community, MAVkit (see below) integrates an OTG receiver to stream low-latency video from a self-driving car's camera, enabling real-time computer vision for object detection and classification, a critical perception requirement for any autonomous system. (This example uses YOLOv2.)
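
One way to reproduce this detection pipeline is OpenCV's DNN module reading frames from the OTG receiver; the sketch below is an assumption about the setup (capture index, cfg/weights file names, and thresholds are placeholders), not the exact deployment:

    # Sketch: running YOLOv2 on frames from the OTG video receiver with
    # OpenCV's DNN module. Capture index, file names, and thresholds are
    # placeholder assumptions.
    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
    cap = cv2.VideoCapture(0)  # the OTG receiver appears as a normal capture device

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # YOLOv2 expects a square 416x416 input blob, scaled to [0, 1]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
        net.setInput(blob)
        detections = net.forward()  # rows: [cx, cy, w, h, objectness, class scores...]
        for det in detections:
            if det[4] > 0.5:  # objectness threshold (assumed)
                print("object at relative (x, y) =", det[0], det[1])
    cap.release()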


Dependable autopilot architectures

Papers: VirtualDrone, ContainerDrone, ViconMAVLink.

As modern unmanned aerial systems (UAS) continue to expand the frontiers of automation, their ever-increasing accessibility and capability expose new challenges to security and flight safety.

Modern intelligent UAS integrate high-performance computing platforms and sophisticated software to run computation-intensive jobs on board. This high system complexity demands a dependable system design that guarantees computational correctness and flight safety.

I am interested in modern autopilot architectures that can deliver high performance, robustness, and resilience to software faults and cyber attacks.

multicore-autopilot.png

Modern high-performance, cyber attack-resilient, fault-tolerant and safety-assured UAS autopilot.

We develop autopilot systems that use multi-core computing hardware for performance, together with data-driven adaptive control and system-monitoring algorithms for fault detection, protection, and recovery of mission-critical modules, ensuring system safety.

Together with my colleague Dr. Man-Ki Yoon, and directed by Prof. Lui Sha, who serves on the NASA Advisory Council, we published two such autopilots based on virtualization and container technologies, called VirtualDrone and ContainerDrone, respectively:

The VirtualDrone architecture: virtualization technology is used to encapsulate the user's autopilot environment in a virtual machine, while the trustworthy host OS retains direct control of the physical sensors. System safety and security are then enforced by the host's monitoring mechanisms.

To learn more, read our VirtualDrone and ContainerDrone papers (the latter to appear at the DATE 2019 conference, March 2019).


MAVkit: an autopilot framework for mobile robotics research and education

I am interested in open-source software and hardware that let makers build their own autonomous vehicles. Here is such a platform that I developed, now used in universities (with Vicon indoor positioning integration).

One board, many vehicles: MAVkit is a complete networked autopilot system that supports multiple vehicle configurations on one embedded Linux computer, providing a separate file system for each vehicle.

IMG_0019.JPG

MAVkit integrates indoor positioning using Vicon

The Vicon high-fidelity motion capture system provides ground-truth position measurements for robots. ViconMAVLink is software that converts those measurements into simulated GPS packets and sends them to the Linux robots.
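
ViconMAVLink itself is a standalone tool, but its core idea can be sketched with pymavlink: wrap a Vicon position fix in a MAVLink GPS_INPUT message and stream it to the robot. The UDP endpoint and the fixed coordinates below are placeholders:

    # Sketch of the ViconMAVLink idea using pymavlink: send a Vicon-derived
    # position to the robot as a MAVLink GPS_INPUT message. The endpoint
    # and example coordinates are placeholders, not the real tool.
    import time
    from pymavlink import mavutil

    link = mavutil.mavlink_connection("udpout:192.168.1.50:14550")  # robot IP (assumed)

    lat, lon, alt = 40.1138, -88.2249, 220.0  # position derived from Vicon (example)

    link.mav.gps_input_send(
        int(time.time() * 1e6),  # time_usec
        0,                       # gps_id
        0,                       # ignore_flags: use all fields
        0, 0,                    # time_week_ms, time_week (unused here)
        3,                       # fix_type: 3D fix
        int(lat * 1e7),          # latitude, degrees * 1e7
        int(lon * 1e7),          # longitude, degrees * 1e7
        alt,                     # altitude (m)
        0.5, 0.5,                # hdop, vdop
        0.0, 0.0, 0.0,           # velocity north/east/down (m/s)
        0.1, 0.02, 0.02,         # speed / horizontal / vertical accuracy
        12,                      # satellites_visible
    )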

One of the biggest challenges in mobile robotics research is building a versatile and reliable testbed on which to explore new intelligent robotics methodologies. MAVkit is an integrated framework of software tools and designs that can be installed on inexpensive embedded Linux computers. Combined with any off-the-shelf R/C model, the system serves as a capable physical platform for mobile robotics research. Because of its low cost, MAVkit is also suited to classroom settings where multiple platforms need to be distributed to students for course projects.

jeep-combined.JPG

Sensing and autonomy

MAVkit integrates IMUs, LiDAR, GPS, Vicon, and cameras; uses PX4 and ArduPilot as backends; and supports a unified user interface (command line and API) for multicopters, fixed-wings, rovers, and boats. The resulting system is a capable tool for mobile robotics research and education.
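
MAVkit's own API is not reproduced here, but the "one interface, many vehicles" idea can be illustrated with pymavlink, which both PX4 and ArduPilot speak: the same connection code serves any vehicle type, identified from the MAVLink heartbeat. The UDP port is an assumption:

    # Sketch of a unified vehicle interface, assuming a MAVLink-speaking
    # backend (PX4 or ArduPilot): the same code connects to any vehicle and
    # identifies its type from the heartbeat. The UDP port is an assumption.
    from pymavlink import mavutil

    link = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
    hb = link.wait_heartbeat()  # blocks until the vehicle announces itself

    VEHICLE_TYPES = {
        mavutil.mavlink.MAV_TYPE_QUADROTOR: "multicopter",
        mavutil.mavlink.MAV_TYPE_FIXED_WING: "fixed-wing",
        mavutil.mavlink.MAV_TYPE_GROUND_ROVER: "rover",
        mavutil.mavlink.MAV_TYPE_SURFACE_BOAT: "boat",
    }
    print("connected to a", VEHICLE_TYPES.get(hb.type, "unknown vehicle"))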


P1040301.JPG

A $40 Raspberry Pi computer and modern computer vision

MAVkit targets inexpensive real-time embedded Linux computers, such as the Raspberry Pi. These platforms do not come with CUDA-capable GPUs. But no worries: a $15 OTG kit and a laptop let state-of-the-art computer vision techniques drive MAVkit vehicles.

IMG_0036.jpg

MAVkit communicating over radio

MAVLink packets can be sent over a telemetry radio. With a 915 MHz transceiver, MAVkit running on a ground-control Linux computer can command a UAV beyond visual line of sight.
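
As a sketch of that workflow with pymavlink (the serial device and 57600 baud are typical of SiK 915 MHz radios, but are assumptions for this setup):

    # Sketch: commanding a vehicle over a 915 MHz telemetry radio with
    # pymavlink. The serial device and baud rate are assumptions typical
    # of SiK radios.
    from pymavlink import mavutil

    link = mavutil.mavlink_connection("/dev/ttyUSB0", baud=57600)
    link.wait_heartbeat()  # confirm the radio link before sending commands

    # Arm the vehicle via COMMAND_LONG, same as over any other MAVLink transport
    link.mav.command_long_send(
        link.target_system, link.target_component,
        mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM,
        0,                     # confirmation
        1, 0, 0, 0, 0, 0, 0,   # param1 = 1: arm
    )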