Aditya Nisal
Email : anisal@wpi.edu
Description
Aim:
To calibrate the camera by obtaining its calibration matrix and to undistort existing images using classical Computer Vision techniques.
Key concepts:
- Camera Extrinsic Matrix: Transforms points from the world coordinate system to the camera coordinate system.
- Camera Intrinsic Matrix: Transforms points from the camera coordinate system to the pixel coordinate system.
- Homography Matrix: Relates the 2D points of a plane in the image to their corresponding 3D points in the world. Each image has its own homography because of its specific rotation and translation.
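The intrinsic mapping above can be sketched in a few lines: a minimal example of the intrinsic matrix K taking a point from the camera frame to pixel coordinates. The focal lengths and principal point here are illustrative placeholders, not values from a real calibration.

```python
import numpy as np

# Illustrative intrinsic matrix (fx, fy, cx, cy are made-up values).
K = np.array([
    [800.0,   0.0, 320.0],   # fx, skew, cx
    [  0.0, 800.0, 240.0],   # fy, cy
    [  0.0,   0.0,   1.0],
])

def project(K, X_cam):
    """Project a 3D point in the camera frame to pixel coordinates."""
    x = K @ X_cam            # homogeneous pixel coordinates
    return x[:2] / x[2]      # perspective divide

u, v = project(K, np.array([0.1, -0.05, 2.0]))   # → (360.0, 220.0)
```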
Methodology:
- Load the images.
- Define the camera calibration matrix (intrinsic parameters).
- Define camera 1's pose as the world frame.
- For each pair of images:
  - Load the correspondence points between the two images from the given text file.
  - Apply the RANSAC algorithm to obtain the best set of inliers and the Fundamental matrix (F). This step removes outlier correspondences.
  - Plot and show the correspondences between the two images.
  - Compute the Essential matrix (E) from the Fundamental matrix. The Essential matrix encodes the epipolar geometry between two calibrated cameras.
  - Extract the possible camera poses (rotation and translation) from the Essential matrix.
  - Choose the correct camera pose from the extracted candidates. This step ensures that the relative orientation of the cameras matches the actual physical setup.
  - Construct the projection matrix for the second camera.
  - Calculate the reprojection errors for the points using the current projection matrix. This shows how accurately the 3D points reproject onto the 2D image.
  - Perform non-linear triangulation to refine the 3D point coordinates so that they best match the observed 2D points.
  - Compare and plot the difference between the 3D points obtained by linear and non-linear triangulation.
  - Store the camera poses and 3D points.
- Use the Perspective-n-Point (PnP) method for the next set of images to find each camera pose without recomputing the entire structure.
- For each new image:
  - Get the 2D-3D point correspondences.
  - Apply PnP RANSAC to get an initial estimate of the camera pose.
  - Refine the pose using non-linear PnP.
  - Perform triangulation to get 3D coordinates for the remaining 2D points in the image.
  - Refine these 3D coordinates using non-linear triangulation.
  - Store the refined camera pose and 3D points.
  - Calculate and store the reprojection errors.
  - Plot the camera poses and their 3D points.
- Perform Bundle Adjustment to further refine the camera poses and 3D points. This optimizes all camera poses and 3D points simultaneously to reduce the overall reprojection error.
- Visualize the final camera poses and 3D points using the helper functions.
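The step of computing the Essential matrix from the Fundamental matrix can be sketched as below: E = K^T F K, followed by enforcing the (1, 1, 0) singular-value constraint that a valid essential matrix must satisfy. The K and F in the usage example are illustrative placeholders, not values from this project's data.

```python
import numpy as np

def essential_from_fundamental(F, K):
    """Compute E = K^T F K and enforce the (1, 1, 0) singular values."""
    E = K.T @ F @ K
    U, _, Vt = np.linalg.svd(E)
    # A valid essential matrix has two equal singular values and one zero.
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

With normalized cameras (K = I), the fundamental and essential matrices coincide up to this constraint, which the enforcement step restores.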
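Extracting the candidate camera poses from the Essential matrix can be sketched via the standard SVD decomposition, which yields four (R, C) candidates; the sign fix keeps each R a proper rotation. This is a generic sketch of the technique, not the project's exact code.

```python
import numpy as np

def extract_poses(E):
    """Decompose an essential matrix into its four candidate (R, C) poses."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    poses = []
    for R, C in [(U @ W @ Vt,   U[:, 2]),
                 (U @ W @ Vt,  -U[:, 2]),
                 (U @ W.T @ Vt,  U[:, 2]),
                 (U @ W.T @ Vt, -U[:, 2])]:
        if np.linalg.det(R) < 0:      # keep R a proper rotation (det = +1)
            R, C = -R, -C
        poses.append((R, C))
    return poses
```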
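The linear triangulation and reprojection-error steps can be sketched as follows, assuming normalized image coordinates (K = I) and 3x4 projection matrices; the camera setup in the test is a made-up example, not this project's data.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 2D-2D correspondence."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector of A
    return X[:3] / X[3]        # de-homogenize

def reprojection_error(P, X, x):
    """Pixel distance between an observation and the reprojected 3D point."""
    p = P @ np.append(X, 1.0)
    return np.linalg.norm(p[:2] / p[2] - x)
```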
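Choosing the correct pose among the four candidates (the cheirality check of Figure 7) can be sketched as: count, for each candidate, how many triangulated points lie in front of both cameras, and keep the candidate with the highest count. The function names here are illustrative, not the project's own.

```python
import numpy as np

def points_in_front(R, C, pts):
    """Cheirality count: points with positive depth in the reference
    camera (Z > 0) and in the candidate camera (r3 . (X - C) > 0)."""
    r3 = R[2]
    return sum(1 for X in pts if X[2] > 0 and r3 @ (X - C) > 0)

def choose_pose(poses, pts_per_pose):
    """Pick the (R, C) candidate maximizing the cheirality count over
    the points triangulated with that candidate."""
    counts = [points_in_front(R, C, pts)
              for (R, C), pts in zip(poses, pts_per_pose)]
    return poses[int(np.argmax(counts))]
```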
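The non-linear triangulation step can be sketched as a small Gauss-Newton refinement of one 3D point against its 2D observations, using a forward-difference Jacobian. This is a minimal sketch under simplified assumptions (normalized cameras, numerical derivatives), not the project's actual optimizer.

```python
import numpy as np

def refine_point(X0, cams, obs, iters=10, eps=1e-6):
    """Gauss-Newton refinement of a 3D point minimizing reprojection error."""
    def residuals(X):
        r = []
        for P, x in zip(cams, obs):
            p = P @ np.append(X, 1.0)
            r.extend(p[:2] / p[2] - x)      # 2D reprojection residual
        return np.array(r)

    X = np.array(X0, dtype=float)
    for _ in range(iters):
        r = residuals(X)
        J = np.empty((len(r), 3))
        for j in range(3):                   # forward-difference Jacobian
            d = np.zeros(3); d[j] = eps
            J[:, j] = (residuals(X + d) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        X += step
    return X
```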
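The linear part of the PnP step can be sketched as a DLT: with six or more 2D-3D correspondences, each correspondence contributes two linear equations on the 12 entries of the projection matrix. This is a generic sketch in normalized coordinates; the project additionally wraps this in RANSAC and a non-linear refinement.

```python
import numpy as np

def linear_pnp(X3d, x2d):
    """Linear PnP (DLT): estimate a 3x4 projection matrix from
    2D-3D correspondences in normalized image coordinates."""
    A = []
    for X, x in zip(X3d, x2d):
        Xh = np.append(X, 1.0)
        A.append(np.concatenate([Xh, np.zeros(4), -x[0] * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -x[1] * Xh]))
    _, _, Vt = np.linalg.svd(np.array(A))
    P = Vt[-1].reshape(3, 4)
    return P / np.linalg.norm(P[2, :3])   # fix the arbitrary DLT scale
```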
Figure 1. Raw Matches
Figure 2. Reprojection Points
Figure 3. Linear vs. Non-Linear Triangulation
Figure 4. Triangulation
Figure 5. Non-Linear PnP
Figure 6. Bundle Adjustment
Figure 7. Cheirality