I. The programme I drafted is correct, yet it is not sufficient to handle every case of '2D-to-3D space' mapping. Put simply:
1. Reason:
- The assigned point is placed on the object, but it is actually treated as a point behind the object, i.e. the corresponding position on the table. This looks incorrect, yet the mapping itself is right: the algorithm is simply unable to recognise where the assigned point lies between the 2D frame and the 3D frame.
2. Example:
3. Modification: adopting the EPnP method: Paper
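To see why the mapping is "correct but insufficient", here is a small Python sketch (all camera numbers are made up) of a pinhole camera over a table: the plane homography recovers table points exactly, but a point 5 cm above the table back-projects to the spot on the table behind it, which is exactly the behaviour described above.

```python
import numpy as np

# Hypothetical pinhole camera looking at a table (world plane Z = 0).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
a = np.deg2rad(30)                      # camera tilted 30 degrees about X
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a),  np.cos(a)]])
t = np.array([0., 0., 1.])              # 1 m in front, in camera coordinates

def project(X):
    """Project a 3D world point into pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# Homography valid ONLY for points on the plane Z = 0: columns r1, r2, t.
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

def backproject(u):
    """Map a pixel back to table coordinates via the plane homography."""
    X = np.linalg.inv(H) @ np.array([u[0], u[1], 1.])
    return X[:2] / X[2]

on_plane  = np.array([0.10, 0.20, 0.0])   # point on the table
on_object = np.array([0.10, 0.20, 0.05])  # same (X, Y), 5 cm above the table

print(backproject(project(on_plane)))   # recovers (0.10, 0.20)
print(backproject(project(on_object)))  # lands elsewhere: the point "behind"
```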
Friday, 30 May 2014
Thursday, 29 May 2014
Wednesday, 28 May 2014
Estimate Homography to do 2D to 3D integration (16)
I. Testing sequence:
II. Testing results in tform:
1.
2.
3.
4.
III. Issues that need to be modified:
Because we use the function 'bwlabel()', a clear boundary between the red circle and the black background is an important factor in how well 'bwlabel()' works. Using the current marker may lead to problems such as the following:
To make sure the black background is big enough to make the red circles stand out, I decided to make the following modification:
This approach then copes with the unclear-boundary issue:
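For reference, the behaviour of 'bwlabel()' that makes the boundary so important can be reproduced with a minimal Python stand-in (toy image and 4-connectivity, not the MATLAB implementation): as soon as the black gap between two red blobs closes, they are labelled as one component.

```python
import numpy as np
from collections import deque

def label(binary):
    """4-connected component labelling, a toy stand-in for MATLAB's bwlabel()."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue
        count += 1
        labels[i, j] = count
        q = deque([(i, j)])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

# Two 'red circle' blobs separated by a one-pixel black boundary (column 3).
img = np.zeros((5, 7), dtype=bool)
img[1:4, 1:3] = True
img[1:4, 4:6] = True
_, n_separated = label(img)
img[2, 3] = True          # the clear boundary is lost
_, n_merged = label(img)
print(n_separated, n_merged)  # 2 1
```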
Tuesday, 27 May 2014
Estimate Homography to do 2D to 3D integration (15)
I. Maintaining the programme:
- Errors after feeding the video sequence into the original code
- Inconsistent thresholding settings across a whole video
- Display the experimental results as a video if possible
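One way to avoid inconsistent thresholding settings across a whole video is to recompute the threshold per frame; a minimal Python sketch of Otsu's method (the frame data here is synthetic, not from our sequences):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick a threshold per frame by maximising between-class variance
    (Otsu's method). gray: uint8 array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                      # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A synthetic frame with an obvious foreground/background split:
frame = np.concatenate([np.full(500, 40, np.uint8),
                        np.full(500, 200, np.uint8)])
t = otsu_threshold(frame)
mask = frame > t          # the per-frame binary mask
```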
Saturday, 24 May 2014
Estimate Homography to do 2D to 3D integration (14)
I. Studying how to program a 'mouse clicking function' in MATLAB:
- In Chinese - 1
- In Chinese - 2
II. Results:
- A GUI for capturing mouse click events
- Able to load a video stream for the experiment
- Assuming 30 frames per second, the user can locate the 4 fixed points and multiple moving points in a frame extracted from these 30 frames.
- Here is a demo of the GUI (... the monitor resolution cannot be recognised by the recording software, so just ignore the incorrect position of the pointer)
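The bookkeeping behind such a GUI (the first 4 clicks are the fixed points, later clicks are moving points) can be sketched without any real windowing toolkit; the class and names below are hypothetical, not the MATLAB code:

```python
class ClickCollector:
    """Collect clicked points: first n_fixed clicks fix the reference
    points, every later click records a moving point."""

    def __init__(self, n_fixed=4):
        self.n_fixed = n_fixed
        self.fixed = []
        self.moving = []

    def on_click(self, x, y):
        """Callback a GUI toolkit would invoke once per mouse click."""
        if len(self.fixed) < self.n_fixed:
            self.fixed.append((x, y))
        else:
            self.moving.append((x, y))

    def ready(self):
        """Enough fixed points to estimate the transformation?"""
        return len(self.fixed) == self.n_fixed

c = ClickCollector()
for p in [(10, 20), (300, 22), (305, 210), (12, 208), (150, 100)]:
    c.on_click(*p)
print(c.ready(), len(c.fixed), c.moving)  # True 4 [(150, 100)]
```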
Friday, 23 May 2014
Estimate Homography to do 2D to 3D integration (13)
I. Studying how to capture camera intrinsic and extrinsic parameters:
- Camera Calibration Toolbox for MATLAB (...caltech, add-ons)
- Tutorial:
1. Example for explaining the process in Camera Calibration
2. Doing our own calibration
- In Chinese (main): applying the existing function in MATLAB for camera calibration
- In Chinese (...in detail)
- In English: Mathworks Tutorial in Video
II. Made our checkerboard:
II. Made our checkerboard:
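Calibration toolboxes expect the world coordinates of the checkerboard's inner corners, all lying on the plane Z = 0; a Python sketch with a hypothetical 8x6 board and 25 mm squares (our board's dimensions may differ):

```python
import numpy as np

def checkerboard_points(cols=8, rows=6, square=0.025):
    """World coordinates (metres, Z = 0) of the inner corners of a
    checkerboard, in the row-major order calibration tools expect."""
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    return np.stack([xs.ravel() * square,
                     ys.ravel() * square,
                     np.zeros(cols * rows)], axis=1)

pts = checkerboard_points()
print(pts.shape)   # (48, 3): one row per inner corner
print(pts[-1])     # farthest corner: 7*0.025 = 0.175, 5*0.025 = 0.125
```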
Thursday, 22 May 2014
Estimate Homography to do 2D to 3D integration (12)
I. Collecting video streams for experiments
II. Reading video streams in MATLAB
- mathworks - Image Processing Toolbox
- In Chinese - 1
- In Chinese - 2
III. Studying how to program a 'mouse clicking function' in MATLAB
Wednesday, 21 May 2014
Summary of Weekly Meeting - (11)
I. Targets in the following areas:
- Taking videos to record serial viewpoint changes (... MATLAB function)
- Checking whether the height of an object above the plane causes the inaccurate results
- GUI: allowing mouse clicks on each video frame to locate points, either to compute the transformation function or to provide a set of given points for mapping
* The coin sits almost flat on the desk with no significant height above the plane, so it is quite reliable under transformation in our proposed method; treat it as a robust reference point for checking each experimental result.
II. Way to measure the error of a mapped position:
[Figure: Note from George]
- Assume the distance from the hand-held camera to the target is about 1 metre
- The acceptable range of viewpoint is 1 degree from the centre to the right, and likewise in the other directions
- Use the tangent to compute the distance of the offset
- Translate the offset from metres to pixels based on the image resolution
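The steps above reduce to a couple of lines; only the 1 m distance and 1 degree come from the note, the pixel scale below is a made-up assumption:

```python
import math

# From the note: camera about 1 m from the target, 1 degree acceptable.
distance_m = 1.0
angle_deg = 1.0
offset_m = distance_m * math.tan(math.radians(angle_deg))

# Hypothetical scale: assume 640 pixels of the image span 0.5 m of table.
pixels_per_m = 640 / 0.5
offset_px = offset_m * pixels_per_m

print(round(offset_m, 4))   # 0.0175 (metres)
print(round(offset_px, 1))  # 22.3 (pixels)
```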
Estimate Homography to do 2D to 3D integration (11)
I. Preparing the meeting
II. Results:
1. Modification of my metric:
Changing the type of colour marker, using some morphology operations, considering ellipse fitting, and adopting a projective transformation instead of an affine one
Check the central positions of the red circles in both the Kinect frame and the hand-held camera image; they should be right at the centre of each circle
Check the error of the same central points in the hand-held camera image: subtract the positions before and after the projective transformation
2. Experimental results 1: different hh-imgs, transforming points back into the Kinect frame
Left column: assigned points; middle column: affine transformation + average metric of 'tform()'; right column: projective transformation (our proposed method)
3. Experimental results 2: different hh-imgs, transforming points back into themselves
- This is for checking the correctness of the given form of the projective transformation
Left column: assigned points; middle column: affine transformation + average metric of 'tform()'; right column: projective transformation (our proposed method)
4. Experimental results 3: different hh-imgs, transforming points back into the same hh-img 1
- This is for checking whether scale, the object's height above the plane, or resolution cause the offsets in the previous results.
Left column: assigned points; middle column: affine transformation + average metric of 'tform()'; right column: projective transformation (our proposed method)
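The "subtract before and after" check from result 1 above can be sketched in Python; the 3x3 matrix and circle centres below are invented, only the round-trip error metric matters:

```python
import numpy as np

def apply_projective(H, pts):
    """Apply a 3x3 projective transformation to an Nx2 array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# Hypothetical transformation and circle centres (pixels):
H = np.array([[1.02, 0.01, 3.0],
              [-0.02, 0.98, -1.5],
              [1e-4, 2e-4, 1.0]])
centres = np.array([[120., 80.], [400., 90.], [260., 300.]])

# Map forward and back, then subtract: the per-point error we report.
round_trip = apply_projective(np.linalg.inv(H), apply_projective(H, centres))
errors = np.linalg.norm(round_trip - centres, axis=1)
print(errors.max())  # numerically ~0 for an exact inverse
```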