Global localization is a key problem for autonomous robots. We use a semantic map and object detection to perform global localization with a maximum-likelihood estimation (MLE) method.
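As a rough illustration of the idea (not the repository's actual code; the object classes, map bounds, and noise parameters below are all hypothetical), candidate poses on the grid map can be scored by the likelihood of the detected object bearings given the object positions stored in the semantic map, and the maximum-likelihood pose returned:

```python
# Hypothetical sketch of MLE pose estimation from object detections.
# Object classes, positions, map bounds, and noise values are illustrative only.
import numpy as np

# Semantic map: object class -> (x, y) position in the map frame (assumed format).
semantic_map = {"door": (2.0, 1.5), "chair": (4.0, -0.5), "cabinet": (0.5, 3.0)}

def bearing_log_likelihood(pose, observations, sigma=0.1):
    """Log-likelihood of observed object bearings for one candidate pose.

    pose: (x, y, theta); observations: list of (class_name, bearing_rad)
    where the bearing is measured by the detector relative to the camera axis.
    """
    x, y, theta = pose
    log_l = 0.0
    for cls, measured in observations:
        if cls not in semantic_map:
            continue
        ox, oy = semantic_map[cls]
        expected = np.arctan2(oy - y, ox - x) - theta
        err = np.arctan2(np.sin(measured - expected),
                         np.cos(measured - expected))   # wrap to [-pi, pi]
        log_l += -0.5 * (err / sigma) ** 2               # Gaussian bearing noise
    return log_l

def mle_pose(observations, grid_res=0.25, angle_res=np.deg2rad(10)):
    """Exhaustively score candidate poses and return the maximum-likelihood one."""
    best, best_pose = -np.inf, None
    for x in np.arange(0.0, 5.0, grid_res):              # map bounds are placeholders
        for y in np.arange(-1.0, 4.0, grid_res):
            for theta in np.arange(-np.pi, np.pi, angle_res):
                score = bearing_log_likelihood((x, y, theta), observations)
                if score > best:
                    best, best_pose = score, (x, y, theta)
    return best_pose
```

In practice the candidate set can be restricted to the free cells of the grid map to keep the search cheap.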
- JetPack 3.3
- TensorFlow 1.9.0
- OpenCV
- Matplotlib
- PIL
- ROS Kinetic
- Jetson TX2
- Lidar
- USB camera
- Autonomous robot
- Odometry from wheel encoders or an IMU
- In a ROS system, if we use the move_base package, we need to provide a 2D initial pose by hand (for example through RViz).
- Therefore, we want to compute the initial pose automatically (see the sketch below).
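For reference, an automatically computed pose can be handed to the navigation stack by publishing a `geometry_msgs/PoseWithCovarianceStamped` message on the `/initialpose` topic, the same channel RViz uses. A minimal sketch, with placeholder pose values:

```python
#!/usr/bin/env python
# Minimal sketch: publish a computed pose on /initialpose so the navigation
# stack receives it without a manual click in RViz. Pose values are placeholders.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped, Quaternion
from tf.transformations import quaternion_from_euler

rospy.init_node("auto_initial_pose")
pub = rospy.Publisher("/initialpose", PoseWithCovarianceStamped,
                      queue_size=1, latch=True)

msg = PoseWithCovarianceStamped()
msg.header.frame_id = "map"
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 1.0                     # estimated x (placeholder)
msg.pose.pose.position.y = 2.0                     # estimated y (placeholder)
msg.pose.pose.orientation = Quaternion(*quaternion_from_euler(0.0, 0.0, 0.5))
msg.pose.covariance[0] = msg.pose.covariance[7] = 0.25   # x, y variance
msg.pose.covariance[35] = 0.07                           # yaw variance

rospy.sleep(1.0)   # give subscribers time to connect
pub.publish(msg)
```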
- Train an object detection model using TensorFlow.
- Export the frozen model and put it into the `frozen_model` folder (a loading sketch follows this list).
- Put the whole package into a ROS workspace.
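The exported graph can then be loaded with the TensorFlow 1.x API. The file name and tensor names below are assumptions based on the standard TensorFlow Object Detection API export format and may differ for your model:

```python
# Sketch of loading a frozen TF 1.x detection graph. The file name and tensor
# names are assumptions (standard TF Object Detection API exports).
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_model/frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)   # placeholder input image
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image})
```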
- We build a semantic map with Gmapping and object detection.
- The background is the grid map, and the points in the map represent the object positions (illustrated below).
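A quick way to visualize such a map (illustrative only; the file path and object coordinates are placeholders, not the repository's data format):

```python
# Illustrative only: overlay labeled object points on the occupancy grid.
# The map file name and the object pixel coordinates are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

grid = np.array(Image.open("map.pgm"))               # grid map from Gmapping (placeholder path)
objects = {"door": (120, 80), "chair": (200, 150)}   # pixel coordinates (placeholders)

plt.imshow(grid, cmap="gray")
for name, (u, v) in objects.items():
    plt.plot(u, v, "ro")
    plt.annotate(name, (u, v))
plt.title("Semantic map: grid map + detected object positions")
plt.show()
```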
Before estimating the initial pose, you need to run the following nodes in ROS:
- a map server to publish the map
- robot control: publish `cmd_vel` and subscribe to the odometry
- a LiDAR such as a Hokuyo, publishing the `scan` data
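As a quick sanity check (not part of the repository), you can confirm these nodes are up by waiting for their topics before starting the localization script; the topic names below follow common ROS defaults and may differ in your setup:

```python
# Sanity-check sketch: confirm the prerequisite nodes are publishing before
# starting the localization script. Topic names follow common ROS defaults.
import rospy
from nav_msgs.msg import OccupancyGrid, Odometry
from sensor_msgs.msg import LaserScan

rospy.init_node("check_prerequisites")
for topic, msg_type in [("map", OccupancyGrid),
                        ("odom", Odometry),
                        ("scan", LaserScan)]:
    try:
        rospy.wait_for_message(topic, msg_type, timeout=5.0)
        rospy.loginfo("%s: OK", topic)
    except rospy.ROSException:
        rospy.logwarn("%s: no message received", topic)
```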
- Run `python initial_pose.py` in the `scripts` folder.
- It subscribes to the `scan` and `imu/data` topics and needs a USB camera.
- It publishes `cmd_vel` to rotate the robot, `webcam_image`, and the final `initialpose` (a rotate-and-capture sketch follows this list).
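The rotate-and-capture pattern might look roughly like this (a sketch only; the camera index, rotation speed, and duration are placeholders):

```python
# Sketch of the rotate-and-capture loop: spin the robot with cmd_vel while
# grabbing USB-camera frames and republishing them as webcam_image.
# Camera index, speed, and duration are placeholders.
import cv2
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

rospy.init_node("rotate_and_capture")
cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
img_pub = rospy.Publisher("webcam_image", Image, queue_size=1)
bridge = CvBridge()
cap = cv2.VideoCapture(0)                       # USB camera (placeholder index)

spin = Twist()
spin.angular.z = 0.3                            # slow in-place rotation
rate = rospy.Rate(10)
t_end = rospy.Time.now() + rospy.Duration(20)   # rotate ~20 s (placeholder)
while not rospy.is_shutdown() and rospy.Time.now() < t_end:
    cmd_pub.publish(spin)
    ok, frame = cap.read()
    if ok:
        img_pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
    rate.sleep()
cmd_pub.publish(Twist())                        # stop the robot
```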
- `camera_save.py`: a simple script to save the camera image.
- `visilize.py`: an example script to test the frozen model with a video.
- `send_goal.cpp`: we also provide a function that can send the navigation goal through voice recognition. Here we use the Baidu package: https://github.com/DinnerHowe/baidu_speech (a Python equivalent of goal sending is sketched below).
- Center path: you need to alter `grid_path.cpp` and input your own path in the global planner package of the navigation stack.
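`send_goal.cpp` is written in C++; for illustration, the same goal-sending idea in Python with actionlib (the goal coordinates below are placeholders):

```python
# Sketch: send a navigation goal to move_base via actionlib (a Python version
# of the idea behind send_goal.cpp; goal coordinates are placeholders).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_goal_example")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 3.0          # placeholder goal position
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0       # face along the map x-axis

client.send_goal(goal)
client.wait_for_result()
```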