The objective of this project is to develop machine learning algorithms for an autonomous mobile robot that navigate a Turtlebot3 through an environment to capture a care package. The work covers:
- Filtering and cleaning sensor data
- Object detection
- Mapping
- Control
- data: folder containing the data from the simulations
  - raw_dataset_lidar.csv: lidar sensor data
  - raw_dataset_odom.csv: encoder (odometry) data
  - merged_lidar_odom_data.csv: merged lidar and odometry data
- features
  - radar_plot_frames: contains frames of the lidar data
  - plotting.ipynb: (deprecated) data preprocessing
- figs: folder containing figures for the report
  - gantt.png: Gantt chart
  - objective_fig.png: project objective graph
- preprocessing: preprocessing of the data
  - lidar_preprocessing.ipynb: notebook containing the pipeline to generate the radar graph
  - cluster_objects.ipynb: notebook that uses the raw lidar data and finds the optimal clustering parameters (see the DBSCAN sketch below)
  - icp.ipynb: notebook that implements the ICP algorithm (see the ICP sketch below)
  - DBSCAN.py: Python script containing the DBSCAN class and functions
  - radar_plot.gif: animation of the simulation data from the LIDAR sensor
  - sample_DBSCAN_clustering.png: figure of the sample DBSCAN clusters
  - optimal_DBSCAN_clustering.png: figure of the optimal DBSCAN clusters
- reports
  - figs: folder containing figures for the report
    - gantt.png: Gantt chart
    - objective_fig.png: project objective graph
  - main.pdf: PDF version of the report
  - main.tex: LaTeX source of the report
- src: folder containing the ML pipelines
  - ml_robotics_interfaces: ROS interface code
  - ml_robotics_project: ROS program files
    - ml_robotics_project: ROS program code
      - algorithm_handlers: translate between ROS and ML
        - IPpoHandler.py: interface for PPO handlers
        - ISlamHandler.py: interface for SLAM handlers
        - IYoloHandler.py: interface for YOLO handlers
        - PpoHandler.py: handler for the PPO algorithm
        - SlamHandler.py: handler for the SLAM algorithm
        - YoloHandler.py: handler for the YOLO algorithm
      - ml_algorithms: ML algorithms
        - Slam.py: A* algorithm class
        - Yolo.py: node that performs SLAM using lidar and odometry data
        - PPO2: all PPO nodes, algorithms, and data
      - ros_nodes: classes that connect to the ROS environment
        - CameraNode.py: node that captures camera images and feeds them into ROS
        - YoloNode.py: node that computes objective coordinates given an image
        - INode.py: initializes variables and parameters
        - PpoNode.py: node that computes the desired robot velocity given odometry and trajectory data
      - util: folder containing ROS utility scripts
        - ROS_message_utils: constructs a ROS2 PoseStamped message (see the sketch below)
      - ppo_node_entry_point.py: entry point for the PPO node (a minimal entry-point sketch appears below)
      - slam_node_entry_point.py: entry point for the SLAM node
      - yolo_node_entry_point.py: entry point for the YOLO node
      - rel_pos_finder_node_entry_point.py: entry point for the PosFinderNode
      - camera_node_entry_point.py: entry point for the camera node
- object_detection: object detection pipeline
  - predict: script used for prediction
  - yolo_training.py: script that trains the YOLO object detector (see the training sketch below)
- turtlebot3_example_code: pre-built Turtlebot ROS program code for reference
- q_learning.ipynb: Q-learning for path planning and control (see the Q-learning sketch below)
- rosbag_to_csv.py: Python script that generates the CSV data from the simulation rosbags
- simulation.md: readme describing how to run a simulation
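The clustering notebooks group raw lidar returns into objects with DBSCAN. The sketch below illustrates the idea using scikit-learn's DBSCAN rather than the repo's own DBSCAN.py; the column names and the eps/min_samples values are illustrative assumptions, not taken from the notebooks.

```python
# Hypothetical sketch: cluster one lidar scan with scikit-learn's DBSCAN.
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

scan = pd.read_csv("data/raw_dataset_lidar.csv")    # assumed layout: one row per beam
angles = np.deg2rad(scan["angle"].to_numpy())        # assumed column names
ranges = scan["range"].to_numpy()

# Convert polar lidar readings to Cartesian points before clustering
points = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(points)  # parameters chosen arbitrarily
print(f"found {labels.max() + 1} clusters, {np.sum(labels == -1)} noise points")
```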
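icp.ipynb implements ICP for aligning scans. Its exact implementation is not reproduced here; the sketch below only illustrates a single point-to-point ICP iteration (nearest-neighbour matching followed by an SVD-based rigid transform) on 2-D points.

```python
# Illustrative single ICP iteration; not the notebook's actual code.
import numpy as np
from scipy.spatial import cKDTree


def icp_step(source, target):
    """Align 2-D source points to target with one nearest-neighbour + SVD step."""
    tree = cKDTree(target)
    _, idx = tree.query(source)                  # closest target point for each source point
    matched = target[idx]

    src_c, tgt_c = source.mean(0), matched.mean(0)
    H = (source - src_c).T @ (matched - tgt_c)   # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, R, t                # transformed points, rotation, translation
```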
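ROS_message_utils wraps construction of ROS 2 messages. The helper below is a hypothetical sketch of such a utility for PoseStamped; the actual function name and signature in the repo may differ.

```python
# Hypothetical sketch of a PoseStamped construction helper.
from geometry_msgs.msg import PoseStamped


def make_pose_stamped(node, x, y, z=0.0, frame_id="map"):
    """Build a PoseStamped at (x, y, z) in the given frame, stamped with the node clock."""
    msg = PoseStamped()
    msg.header.stamp = node.get_clock().now().to_msg()
    msg.header.frame_id = frame_id
    msg.pose.position.x = float(x)
    msg.pose.position.y = float(y)
    msg.pose.position.z = float(z)
    msg.pose.orientation.w = 1.0  # identity orientation
    return msg
```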
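Each *_entry_point.py module follows the standard rclpy entry-point pattern. The sketch below assumes a node class named PpoNode importable from ml_robotics_project.ros_nodes; both the class name and the import path are assumptions for illustration.

```python
# Hypothetical sketch of an entry point such as ppo_node_entry_point.py.
import rclpy

from ml_robotics_project.ros_nodes.PpoNode import PpoNode  # assumed import path


def main(args=None):
    rclpy.init(args=args)      # initialize the ROS 2 context
    node = PpoNode()
    rclpy.spin(node)           # process callbacks until shutdown
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```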
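yolo_training.py trains the object detector with YOLO. A plausible minimal version is sketched below, assuming the Ultralytics package and a dataset config named care_package.yaml (both assumptions); the actual script may use different weights, data, and hyperparameters.

```python
# Hypothetical minimal YOLO training script.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")               # start from a small pretrained checkpoint
model.train(
    data="care_package.yaml",            # assumed dataset config: train/val paths, class names
    epochs=50,
    imgsz=640,
)
model.predict("sample_frame.png", save=True)  # run inference on a hypothetical camera frame
```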
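q_learning.ipynb uses Q-learning for path planning and control. The sketch below shows the core of tabular Q-learning, epsilon-greedy action selection plus the one-step update; the state/action encoding and hyperparameters are illustrative assumptions, not the notebook's values.

```python
# Minimal tabular Q-learning building blocks (illustrative only).
import numpy as np

n_states, n_actions = 100, 4              # e.g. grid cells x {forward, backward, left, right}
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1    # learning rate, discount factor, exploration rate
rng = np.random.default_rng(0)


def choose_action(state):
    """Epsilon-greedy action selection over the current Q-table."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))


def q_update(state, action, reward, next_state):
    """One-step (off-policy) Q-learning update."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```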
All notable changes to this repository will be documented here.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
This is the final deliverable version.
This is the midterm deliverable version.
- Added data
- Preprocessed and cleaned the data
- Deployed ML algorithms
- Generated initial results and graphs
This is the initial push of our project.
- Completed the project proposal
- Initialized the project page on Github
- Compiled a guide to run simulations