
Project:

Visual Servoing

Skills Involved:

ROS2

C++

Python

OpenCV


Team Project - 2 members

My role: formulating the algorithm and writing the core logic


Code

Description

Objectives:

The main aim of this project is to perform visual servoing: using a control loop and computer vision functions to bring the robot arm back to its original position.


Methodology:

To spawn the robot and the block, the following commands need to be run:

  1. ros2 launch rrbot_gazebo rrbot_world.launch.py : This is a ROS 2 launch file. The executable Python file rrbot_world.launch.py lists the files and packages that this launch file runs. When run, it spawns a robot with two revolute joints, hence the name RRBot.

  2. ros2 launch rrbot_description object_spawn.launch.py : Similarly, this is also a launch file. This executable Python launch file, object_spawn.launch.py, spawns a plate containing circles of different colors (a sketch of what such a spawn launch file typically looks like follows this list).

  3. ros2 run rviz2 rviz2 : RViz is a visualization tool used to view what the camera captures in the Gazebo environment. Here it shows the captured circles and the detected centers that serve as our position references.

  4. ros2 run image_view image_view -r image:=camera1/image_raw : This command runs the image_view executable, which subscribes to an image topic. Here, it subscribes to the remapped topic camera1/image_raw.

  5. ros2 run opencv_test vis_srv : This is the node where the entire control loop lives. It first publishes a new position for the robot arm, then switches from the position controller to the velocity controller (a sketch of this switch also follows the list), and finally drives the robot back to its desired position, completing the visual servoing algorithm.

  6. ros2 run image_view image_view -r image:=output_image : Again the image_view executable, but this time the topic it subscribes to is remapped to output_image.
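
The launch files themselves are not shown on this page. As a rough sketch of what a file like object_spawn.launch.py typically looks like, the following uses the standard gazebo_ros spawn_entity.py helper; the model file name and entity name here are assumptions, not taken from the project:

    # Hypothetical sketch of an object_spawn.launch.py-style launch file.
    # The model file name and entity name below are placeholders.
    import os

    from ament_index_python.packages import get_package_share_directory
    from launch import LaunchDescription
    from launch_ros.actions import Node


    def generate_launch_description():
        # Path to the model of the plate carrying the colored circles
        # ("object.sdf" is an assumed file name).
        model_path = os.path.join(
            get_package_share_directory('rrbot_description'),
            'models', 'object.sdf')

        # spawn_entity.py is the standard gazebo_ros helper that inserts
        # a model into a running Gazebo world.
        spawn_object = Node(
            package='gazebo_ros',
            executable='spawn_entity.py',
            arguments=['-entity', 'plate', '-file', model_path],
            output='screen',
        )

        return LaunchDescription([spawn_object])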
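
Step 5 mentions switching from the position controller to the velocity controller. In ros2_control this goes through the controller_manager's switch_controller service; the sketch below shows one way to call it from Python. The controller names are assumptions, and older ros2_control releases use start_controllers/stop_controllers in place of the activate/deactivate fields:

    # Hedged sketch of a controller switch via the controller_manager
    # service. Controller names are assumed, not taken from the project.
    from controller_manager_msgs.srv import SwitchController


    def switch_to_velocity(node):
        client = node.create_client(
            SwitchController, '/controller_manager/switch_controller')
        client.wait_for_service()
        req = SwitchController.Request()
        req.activate_controllers = ['forward_velocity_controller']
        req.deactivate_controllers = ['forward_position_controller']
        req.strictness = SwitchController.Request.BEST_EFFORT
        return client.call_async(req)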


What happens in the code:

  • First, an image is received from the camera1/image_raw topic.

  • The incoming ROS 2 image message is then converted to an OpenCV matrix so that OpenCV functions can operate on it.

  • New matrices are created to store the thresholded values for each color. This is done with OpenCV's inRange function in the HSV color space rather than RGB, since thresholding on hue is more robust to lighting changes.

  • After tuning the inRange parameters, we average the pixel coordinates inside each thresholded circle to find its center (its centroid). These values are stored, and the robot arm is then moved to a different position.

  • The same steps are then repeated to find the circle centers at the block's new position with respect to the camera frame. The initial (desired) and final center positions are subtracted to compute the error.

  • While reducing this error, the control loop commands velocities to the robot arm, which drives the arm back to its desired position (see the sketch below).
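
As a minimal sketch of the loop just described, assuming rclpy and cv_bridge: the image topic names follow the commands listed earlier, while the velocity-command topic, the HSV bounds, and the proportional gain are assumptions rather than values from the project.

    # Minimal sketch of the control loop described above (rclpy + OpenCV).
    # Topics camera1/image_raw and output_image follow the commands listed
    # earlier; the command topic, HSV bounds, and gain are assumptions.
    import cv2
    import numpy as np
    import rclpy
    from cv_bridge import CvBridge
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from std_msgs.msg import Float64MultiArray


    class VisualServo(Node):
        def __init__(self):
            super().__init__('vis_srv_sketch')
            self.bridge = CvBridge()
            self.reference = None   # circle center at the desired pose
            self.gain = 0.005       # assumed proportional gain
            self.create_subscription(
                Image, 'camera1/image_raw', self.on_image, 10)
            # Assumed command topic of a ros2_control velocity controller.
            self.cmd = self.create_publisher(
                Float64MultiArray, '/forward_velocity_controller/commands', 10)
            self.dbg = self.create_publisher(Image, 'output_image', 10)

        def centroid(self, hsv, lo, hi):
            # Threshold one color with inRange, then average the pixel
            # coordinates of the mask (image moments) to get the center.
            mask = cv2.inRange(hsv, lo, hi)
            m = cv2.moments(mask)
            if m['m00'] == 0:
                return None
            return np.array([m['m10'] / m['m00'], m['m01'] / m['m00']])

        def on_image(self, msg):
            # Convert the ROS 2 image message to an OpenCV matrix.
            bgr = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            # Assumed HSV bounds for one red circle; tuned by hand.
            center = self.centroid(hsv, (0, 120, 70), (10, 255, 255))
            if center is None:
                return
            if self.reference is None:
                self.reference = center   # first frame defines the target
                return
            error = self.reference - center   # pixel error to the target
            out = Float64MultiArray()
            # Naive mapping from pixel error to two joint velocities; a
            # real implementation would use the camera/robot geometry.
            out.data = [float(self.gain * error[0]),
                        float(self.gain * error[1])]
            self.cmd.publish(out)
            self.dbg.publish(self.bridge.cv2_to_imgmsg(bgr, encoding='bgr8'))


    def main():
        rclpy.init()
        rclpy.spin(VisualServo())


    if __name__ == '__main__':
        main()

Run alongside the commands above, a node like this would republish the camera frames on output_image (step 6) while commanding velocities until the pixel error shrinks to zero.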


Visual Servoing

[Figure_1.png: trajectories of the colors on the plate from the initial to the reference position]