This project involved designing and programming an autonomous robot using the TurtleBot3 platform and ROS (Robot Operating System) to navigate a two-lane track following Canadian traffic rules. The robot was tasked with several key challenges across three milestones: lane detection and path-following, traffic light detection and compliance, and autonomous lane-switching to avoid obstacles (construction pylons). The project emphasized camera-based vision, real-time control, ROS node communication, and autonomous path correction using sensor calibration.
Using the TurtleBot3 Autorace ROS package as a foundation, the team successfully implemented lane detection, red/green traffic light recognition, and a vision-based pylon detection system that triggered obstacle-avoidance behaviors. The robot was calibrated to detect white and yellow lanes under varied lighting conditions and to adjust its course accordingly. Although performance varied during live demos, the system demonstrated successful integration of calibration tuning, ROS-based control flow, and autonomous decision-making.
Developed a TurtleBot3-based autonomous robot capable of lane detection, traffic light response, and construction zone avoidance.
Tuned camera calibration parameters (intrinsic, extrinsic, and color thresholds) for reliable lane and traffic signal detection.
Modified ROS packages to implement real-time decision-making for red/green light response and lane change maneuvers upon detecting pylons.
Implemented obstacle avoidance using a camera-based vision system, with ROS publishers/subscribers and timers used to return the robot to its original lane post-obstacle (a minimal sketch of this pattern follows this list).
Executed rigorous testing of maneuverability, detection distance, and control commands using ROS Twist messages and the publish/subscribe architecture.
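The avoid-and-return flow can be summarized in a minimal rospy sketch: subscribe to a pylon-detection flag, publish Twist commands to swerve, and use a one-shot timer to steer back toward the original lane. The topic names (/detect/pylon, /cmd_vel), velocities, and timing values below are illustrative assumptions, not the exact values used in the project.

```python
#!/usr/bin/env python
# Minimal sketch of the avoid-and-return pattern: subscribe to a pylon flag,
# publish Twist commands, and use a one-shot timer to steer back afterward.
# Topic names (/detect/pylon, /cmd_vel), speeds, and the 3 s delay are assumptions.
import rospy
from std_msgs.msg import Bool
from geometry_msgs.msg import Twist

class PylonAvoider(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/detect/pylon', Bool, self.pylon_cb, queue_size=1)
        self.avoiding = False

    def pylon_cb(self, msg):
        if msg.data and not self.avoiding:
            self.avoiding = True
            self.steer(angular=0.5)           # swerve toward the passing lane
            rospy.Timer(rospy.Duration(3.0),  # return after a fixed interval
                        self.return_cb, oneshot=True)

    def return_cb(self, event):
        self.steer(angular=-0.5)              # steer back toward the original lane
        self.avoiding = False

    def steer(self, linear=0.1, angular=0.0):
        twist = Twist()
        twist.linear.x = linear
        twist.angular.z = angular
        self.cmd_pub.publish(twist)

if __name__ == '__main__':
    rospy.init_node('pylon_avoider')
    PylonAvoider()
    rospy.spin()
```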
Phase 1: Lane Detection Calibration
Calibrated the white and yellow lane-color thresholds using ROS rqt tools and the detect_lane node to ensure accurate path detection (see the masking sketch below).
Validated calibration by monitoring real-time plots and testing multiple lighting scenarios to improve the stability of lane-following performance.
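A stand-alone sketch of the HSV masking that this calibration tunes is shown below. The threshold values and the sample image path are placeholders; the real numbers were adjusted live through the rqt tools until the lane masks stayed stable under the tested lighting conditions.

```python
# Stand-alone illustration of the HSV masking that lane-color calibration tunes.
# Threshold values and the input image path are placeholders, not project values.
import cv2
import numpy as np

def lane_masks(bgr_image,
               white_lo=(0, 0, 180), white_hi=(179, 40, 255),
               yellow_lo=(20, 100, 100), yellow_hi=(35, 255, 255)):
    """Return binary masks for the white (right) and yellow (left) lane colors."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    white_mask = cv2.inRange(hsv, np.array(white_lo), np.array(white_hi))
    yellow_mask = cv2.inRange(hsv, np.array(yellow_lo), np.array(yellow_hi))
    return white_mask, yellow_mask

if __name__ == '__main__':
    frame = cv2.imread('track_frame.png')  # placeholder camera frame
    white, yellow = lane_masks(frame)
    print('white pixels:', cv2.countNonZero(white),
          'yellow pixels:', cv2.countNonZero(yellow))
```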
Phase 2: Light Detection & ROS Integration
Focused on tuning traffic-light color-detection parameters, adjusting HSV thresholds to reliably detect red and green under varying angles (a classification sketch appears below).
Debugged issues where light reflections interfered with lane visibility and proposed balancing the lane and traffic-light detection parameters to minimize conflicts between the two.
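A hedged sketch of the red/green classification is shown below. The HSV ranges and the pixel-count threshold are assumptions for illustration; the project tuned equivalent parameters in the Autorace traffic-light detection node rather than using these exact values.

```python
# Hedged sketch of the red/green decision used for traffic-light compliance.
# HSV ranges and the pixel-count threshold are illustrative assumptions.
import cv2
import numpy as np

PIXEL_THRESHOLD = 500  # minimum matching pixels before acting on a color

def classify_light(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    red = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 120, 120), (179, 255, 255))
    green = cv2.inRange(hsv, (45, 100, 100), (75, 255, 255))
    if cv2.countNonZero(red) > PIXEL_THRESHOLD:
        return 'stop'
    if cv2.countNonZero(green) > PIXEL_THRESHOLD:
        return 'go'
    return 'unknown'
```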
Phase 3: Pylon Detection & Lane Switching
Programmed the lane-exit sequence logic for construction zones using camera detection of orange pylons and ROS messaging.
Calibrated the robot's response timing and turning behavior, ensuring it could change lanes when pylons were detected and return afterward using a timer-based sequence.
Debugged the robot's ability to re-enter its lane post-obstacle by tuning the interaction between the control_lane and detect_lane nodes (sketched below).
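The hand-off that this tuning targeted can be sketched as follows: the lane-detection side publishes the detected lane-center position and a simple PD controller steers toward it. The topic name, gains, speeds, and image-center reference used here are assumptions, not the project's exact configuration.

```python
#!/usr/bin/env python
# Sketch of the detect_lane -> control_lane hand-off: the detection node
# publishes the lane-center x-position and this controller steers toward it
# with a PD law. Topic name, gains, and the 500 px reference are assumptions.
import rospy
from std_msgs.msg import Float64
from geometry_msgs.msg import Twist

class LaneFollower(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/detect/lane', Float64, self.center_cb, queue_size=1)
        self.last_error = 0.0

    def center_cb(self, msg):
        # Error between the detected lane center and the image-center reference.
        error = msg.data - 500.0
        kp, kd = 0.0025, 0.007                    # placeholder gains
        angular = -(kp * error + kd * (error - self.last_error))
        self.last_error = error

        twist = Twist()
        twist.linear.x = 0.1                      # constant forward speed
        twist.angular.z = max(min(angular, 2.0), -2.0)
        self.cmd_pub.publish(twist)

if __name__ == '__main__':
    rospy.init_node('lane_follower')
    LaneFollower()
    rospy.spin()
```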