Tuesday, 17 June 2014

Estimate Homography to do 2D to 3D integration (28): Successfully applying the EPnP algorithm to 3D-2D point correspondences between the Kinect and the hand-held camera
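(As a self-contained illustration of the 3D-2D correspondence problem that EPnP solves, here is a minimal sketch using the simpler Direct Linear Transform rather than EPnP itself; the intrinsic matrix and synthetic points below are assumed values for the example, not the actual Kinect/HH calibration.)

```python
import numpy as np

def dlt_projection(X, uv):
    """Estimate the 3x4 projection matrix P from >= 6 world-to-pixel
    correspondences with the Direct Linear Transform (a simpler
    relative of EPnP, used here only to illustrate the 3D-2D setup).
    X: (N, 3) world points, uv: (N, 2) pixel points."""
    rows = []
    for (x, y, z), (u, v) in zip(X, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 4)       # null vector -> P (up to scale)

def reproject(P, X):
    """Project world points with P and return pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    uvh = (P @ Xh.T).T
    return uvh[:, :2] / uvh[:, 2:3]   # perspective division

# Synthetic check: random points in front of an identity-pose camera.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(8, 3))
K = np.array([[500.0, 0.0, 320.0],    # assumed intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = reproject(np.hstack([K, np.zeros((3, 1))]), X)
P_est = dlt_projection(X, uv)
print(np.abs(reproject(P_est, X) - uv).max())  # small reprojection error
```

With noiseless correspondences the recovered P reprojects the points essentially exactly; EPnP solves the same problem but returns the pose given known intrinsics.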

I. Mapping results between the '640*480 Kinect frame' and the '4608*3456 HH_camera frame'

   1. Using the intrinsic parameters of the Kinect camera:




   2. Using the intrinsic parameters of the HH camera:



II. Mapping results between the '640*480 Kinect frame' and the '640*480 HH_camera frame'

   1. Using the intrinsic parameters of the Kinect camera:




   2. Using the intrinsic parameters of the HH camera:




III. Some points that can be discussed further...

      1. How to explain the results in the "640*480 HH frame" case... what role do the two sets of intrinsic parameters play in the testing process?

      2. Does the size difference between the Kinect frame and the HH frame lead to imprecise mapping results?

      3. Does a greater object-to-camera distance make the mapping less precise?

      4. The Xc/Xc_G output of EPnP lies in the camera coordinate system, whose x-axis and y-axis point opposite to those of the projection coordinate system (i.e., the image plane)... this is a consequence of the pinhole camera model
           - Reference in WIKI
           - Reference in Chinese
           - A Master Thesis in Taiwan NCU
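The camera-to-image relation in point 4 can be sketched as follows: projecting the camera-frame output of EPnP through the intrinsic matrix and dividing by depth gives pixel coordinates directly. (The Kinect-like intrinsics below are assumed values for illustration, not the real calibration.)

```python
import numpy as np

def project_to_image(Xc, K):
    """Project points given in the camera coordinate system (e.g. the
    Xc/Xc_G output of EPnP) onto the image plane via the intrinsics K."""
    Xc = np.asarray(Xc, dtype=float)
    uvw = (K @ Xc.T).T               # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective division by depth

# Assumed Kinect-like intrinsics for a 640*480 image (fx, fy, cx, cy).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

pt_cam = np.array([[0.1, -0.2, 1.5]])   # 1.5 m in front of the camera
print(project_to_image(pt_cam, K))      # -> [[355. 170.]]
```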

IV. Things to note...

      1. Even if the intrinsic parameters were generated from the same camera, it is not correct to apply them to images of a different size... for example, parameters calibrated on "4608*3456 images" are only suitable for test images of that same size... i.e., they cannot be applied to "640*480" ones

      2. The output value "Xc/Xc_G" provided by EPnP can be used directly; there is no need to multiply by minus one (-1)

      3. The output value "Xc/Xc_G" provided by EPnP needs to be transformed into the "Projection Coordinate System" so that the points can be drawn at the correct positions on the image
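Point 1 above can be made concrete: fx, fy, cx, and cy are all expressed in pixels, so when the image is resized the intrinsic matrix must be rescaled by the same factors before reuse. A minimal sketch (the HH-camera intrinsic values below are assumed, not the real calibration):

```python
import numpy as np

def rescale_intrinsics(K, old_size, new_size):
    """Scale an intrinsic matrix calibrated at old_size=(w, h) so it is
    valid for images resized to new_size=(w, h). fx and cx scale with
    the width ratio, fy and cy with the height ratio."""
    sx = new_size[0] / old_size[0]
    sy = new_size[1] / old_size[1]
    S = np.array([[sx, 0.0, 0.0],
                  [0.0, sy, 0.0],
                  [0.0, 0.0, 1.0]])
    return S @ K

# Assumed intrinsics calibrated on the 4608*3456 HH-camera images.
K_hh = np.array([[4200.0,    0.0, 2304.0],
                 [   0.0, 4200.0, 1728.0],
                 [   0.0,    0.0,    1.0]])

# Rescaled so it can be used on 640*480 versions of the same images.
K_small = rescale_intrinsics(K_hh, (4608, 3456), (640, 480))
print(K_small)
```

Note this only accounts for uniform resizing; if the smaller image was cropped rather than resized, the principal point shifts instead.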
