To design and implement an end-to-end neural network for autonomous driving using a simple CNN framework.
- LGSVL simulator as driving simulator
- ROS2 nodes to receive sensor data from LGSVL Simulator
- Preprocessing and feature engineering using Python, OpenCV
- Keras/TensorFlow CNN model trained on the collected data
- Containerised Docker image to run the simulation using the GPU
It is assumed that you have some experience in using Linux, Git, ROS, Docker, web browsers, and Python.
This project is a modified version of the ROS2 End-to-End Lane Following Model with LGSVL Simulator, which is in turn inspired by NVIDIA's End-to-End Deep Learning Model for Self-Driving Cars.
At the highest level, the architecture consists of four modules: a sensor module, a data collection module, a pre-processing and training module, and finally an evaluation module.
- Training mode: Manually drive the vehicle and collect data
- Autonomous mode: The vehicle drives itself using the lane-following model trained on the collected data
- ROS2-based
- Time synchronous data collection node
- Deploying a trained model in a node
- Data preprocessing for training
- Data normalization and manipulation
- Splitting data into training set and test set
- Writing/Reading data in HDF5 format
- Deep Learning model training: Train a model using Keras with TensorFlow backend
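The HDF5 storage and train/test splitting steps above can be sketched roughly as follows. The dataset names, image shape, and split ratio here are illustrative assumptions, not the project's actual schema:

```python
# Sketch of HDF5 storage and train/test splitting for (image, steering) pairs.
# Dataset names ("images", "steering") and shapes are illustrative assumptions.
import numpy as np
import h5py

def write_dataset(path, images, steering):
    with h5py.File(path, "w") as f:
        f.create_dataset("images", data=images, compression="gzip")
        f.create_dataset("steering", data=steering)

def read_dataset(path):
    with h5py.File(path, "r") as f:
        return f["images"][:], f["steering"][:]

def train_test_split(images, steering, test_fraction=0.2, seed=42):
    # shuffle indices, then carve off the test fraction
    idx = np.random.default_rng(seed).permutation(len(images))
    n_test = int(len(images) * test_fraction)
    test, train = idx[:n_test], idx[n_test:]
    return (images[train], steering[train]), (images[test], steering[test])

images = np.zeros((10, 66, 200, 3), dtype=np.uint8)          # toy camera frames
steering = np.linspace(-1.0, 1.0, 10, dtype=np.float32)      # toy steering angles
write_dataset("samples.h5", images, steering)
imgs, angles = read_dataset("samples.h5")
(train_x, train_y), (test_x, test_y) = train_test_split(imgs, angles)
print(len(train_x), len(test_x))  # 8 2
```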
- Docker CE
- Nvidia-docker
- GPU (at least 8 GB of memory)
- ROS2. This project uses ROS2 Dashing.
- TensorFlow, Keras
- Python3
- LGSVL Simulator
In the terminal, at an appropriate location, run this command:
`git clone https://github.com/dr563105/basicad_framework.git`
Then in the terminal run this command:
`docker pull lgsvl/lanefollowing:latest`
- Download the zipped modified simulator (filename: NewSimLFMay201.tar.gz) from my personal Google Drive.
- Untar it and place the folder (NewSimLFMay201) alongside the other folders from the repository.
- Navigate to `NewSimLFMay201/simulator/build` and run the simulator executable.
- Click `Open Browser`. On the first login, create an account and log in. It should automatically download the default maps, vehicles, and simulation configurations.
- Navigate to the `Vehicles` tab and create a new setup by clicking the `Add new` button.
- Name the vehicle and use this as the vehicle URL. Click `Submit`.
- Then click the wrench icon adjacent to the newly created vehicle profile. Set the `Bridge Type` to `ROS2`. Copy the contents from here into the `Sensors` field. Click `Submit`. Add/remove sensors as necessary. Please note that the camera sensors transmit images at 1920x1080 resolution, so they are large and can bottleneck port bandwidth.
- Move to the `Simulations` tab and click the `Add new` button; a popup menu will appear. In this menu's `General` tab, give the simulation a name.
- In the `Maps & Vehicles` tab, check `Interactive Mode` and choose `SanFrancisco` as the map.
- Next, under `Select Vehicles`, select the vehicle configuration name from the dropdown and enter `localhost:9090` as the IP address. The other tabs can be left empty for the moment. Click `Submit`.
- Select the configured simulation and click the `Play` button. If there are no errors, the simulation should execute and a vehicle will spawn in the SF map. Click the `Play` button in the simulator application to make the LGSVL simulator ready.
- Go to the ROS2 workspace `testoutLF/lanefollowing/ros2_ws`.
- Execute the build command in the terminal:
`docker-compose up build_ros`
- You should see two more folders, `build` and `log`, created.
While in `ros2_ws`, run this command in the terminal:
`docker-compose up collect`
- You should see that `rosbridge` is now connected. This can be verified from the settings in the LGSVL simulator app.
- The script inside `collect_script` is copied into `~/ros2_ws/lane_following/collect.py`. The topic names of the nodes must match the topics mentioned in the `Vehicles` tab of the LGSVL WebUI.
- To exit, simply press `Ctrl+C` in the terminal.
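The collection node pairs each camera frame with the control command closest to it in time. A minimal, ROS-free sketch of that pairing logic follows; the actual `collect.py` uses ROS2 subscriptions, and the tolerance value here is an assumption:

```python
# Pair each camera frame with the control message nearest in time,
# dropping pairs whose timestamps differ by more than a tolerance.
# This mimics approximate time synchronization outside ROS2.
def synchronize(frames, controls, tolerance=0.05):
    """frames/controls: lists of (timestamp, payload), sorted by timestamp."""
    pairs = []
    j = 0
    for t_frame, image in frames:
        # advance to the control message nearest this frame's timestamp
        while (j + 1 < len(controls)
               and abs(controls[j + 1][0] - t_frame) <= abs(controls[j][0] - t_frame)):
            j += 1
        t_ctrl, steering = controls[j]
        if abs(t_ctrl - t_frame) <= tolerance:
            pairs.append((image, steering))
    return pairs

frames = [(0.00, "img0"), (0.10, "img1"), (0.20, "img2")]
controls = [(0.01, -0.1), (0.12, 0.0), (0.50, 0.3)]
print(synchronize(frames, controls))  # [('img0', -0.1), ('img1', 0.0)]
```

The third frame is dropped because no control message falls within the tolerance window, which is exactly why an unmatched frame never produces a mislabeled training sample.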
- To evaluate, the necessary evaluation script inside `evaluation_script` must be copied into `~/ros2_ws/lane_following/drive.py`.
- The trained models and their architectures are available inside the `models_images_and_files` folder in the repo.
- Take the `*.h5` HDF5 file and place it inside the folder `ros2_ws/src/lane_following/model/`, which holds the trained models accessed by the evaluation script.
- The evaluation is run by executing this command in the terminal:
`docker-compose up drive_visual`
- A small window will pop up. Now go back to the LGSVL simulator application and gently accelerate using the `Up` arrow key. The vehicle will then navigate autonomously. Toggle traffic and change the weather conditions as necessary.
- To exit, simply press `Ctrl+C` in the terminal.
- For pre-processing, the scripts inside `preprocessing_script` are used. This step requires a data folder; because the collected data is large, it is not provided with the repository.
- Similarly, for training, the scripts inside `training_script` are used.
- For convenience and because of the CUDA dependencies, it is advised to execute both sets of scripts outside Docker.
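As a rough sketch of what the training step might look like, the architecture below follows NVIDIA's published end-to-end network (PilotNet), which this project names as its inspiration; the actual layer sizes and hyperparameters in `training_script` may differ:

```python
# Sketch of a Keras/TensorFlow training script for steering-angle regression.
# Layer sizes follow NVIDIA's published PilotNet; the project's actual
# architecture and hyperparameters may differ.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(66, 200, 3)):
    model = models.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255.0, offset=-0.5),  # normalize pixels to [-0.5, 0.5]
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(1),                             # steering angle (regression)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
# model.fit(train_images, train_angles,
#           validation_data=(test_images, test_angles),
#           epochs=10, batch_size=64)
# model.save("model.h5")   # the .h5 file deployed to ros2_ws/src/lane_following/model/
```

Mean-squared error is the natural loss here because steering is a single continuous output rather than a class label.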
- Light plays an important role in a vehicle recognising its surroundings.
- A greyscale image converted from RGB is sufficient for steering.
- Fusion techniques help with night-time driving and collision avoidance.
- The data is imbalanced, so appropriate cross-validation techniques need to be applied.
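On the imbalance point: lane-keeping data is dominated by near-zero steering angles from straight-road driving. One common mitigation, sketched below, caps the number of samples per angle bin before splitting; the bin count and per-bin cap are illustrative assumptions, not values from this project:

```python
# Cap the number of samples per steering-angle bin so that near-zero
# (straight-driving) angles do not dominate training.
# num_bins and max_per_bin are illustrative assumptions.
import numpy as np

def rebalance(angles, num_bins=25, max_per_bin=200, seed=0):
    """Return sorted indices of a rebalanced subset of `angles`."""
    rng = np.random.default_rng(seed)
    bins = np.linspace(angles.min(), angles.max(), num_bins + 1)
    keep = []
    for i in range(num_bins):
        in_bin = np.where((angles >= bins[i]) & (angles < bins[i + 1]))[0]
        if len(in_bin) > max_per_bin:
            # randomly subsample the over-represented bin
            in_bin = rng.choice(in_bin, size=max_per_bin, replace=False)
        keep.extend(in_bin)
    return np.sort(np.array(keep, dtype=int))

# toy data: mostly straight driving (angle 0) with a few turns
angles = np.concatenate([np.zeros(1000), np.linspace(-1, 1, 100)])
idx = rebalance(angles)   # the zero-angle bin is capped at 200 samples
```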
- Robot Operating System (ROS)
- LGSVL simulator
- Docker CE, including its post-installation steps
- Nvidia-docker
- Use a Conda environment for training. Conda automatically installs TensorFlow's CUDA dependencies and saves the headache of choosing which CUDA version is compatible with a particular version of TensorFlow.
- Always set up the ROS2 environment when running ROS-related commands: `source /opt/ros/dashing/setup.bash`, and from `ros2_ws`, `source install/setup.bash` for the local setup.
- The thesis report folder contains the PDF of the report.
- Further instructions can be found here and here.
