Thursday, 21 August 2014
I. Finish:
- Chapter 3: Proposed Method
- Around 8,400 words, 61 pages
II. Start writing:
- Chapter 4: Experimental Results
- Plan the structure of this chapter
Tuesday, 19 August 2014
Draft Master Thesis (7) & (8)
* This is the memo from 18/08/2014 to today.
I. The following parts of the writing are done:
- Around 60% of Chapter 3 (Proposed Method) is done
- Modifying and adding more paragraphs in Chapter 2 (Introduction and Literature Review)
- Inserting relevant pictures in Chapter 2 and Chapter 3
- Collecting pictures to help illustrate all ideas in Chapter 2 and Chapter 3
- 38 pages, 5,077 words in total now
II. Planning how to organize Chapter 4 (Experimental Results) and Chapter 5 (Discussion)
Sunday, 17 August 2014
Estimate Homography to do 2D to 3D integration (21): Extract known points in pairs automatically
I. Generating another set of experimental results for zoom, tilt, viewpoint_change, and rotation using the webcam:
- These results will serve as a demonstration of the following four points:
1. How we run and validate our proposed system before adopting the eye-tracker
2. Why we use BW segmentation to improve the Particle Filtering result in the next step
3. Why we would like to increase the size of the color markers
4. The differences and issues that arise when choosing the eye-tracker instead of the webcam
- The videos, graphs, precision, and accuracy can be downloaded via: https://db.tt/kSwzkZ6e
II. Debugging and editing my "3D rendering" code:
- Dealing with the problem that, while the system runs automatically, my proposed method cannot show fixations in the World Coordinate System without overlap across frames
- Changing the color and type of the markers to make them clearer
- The updated videos and graphs can be downloaded via: https://db.tt/eUmPKGKH
(The original results can be found via: https://db.tt/2sRENfGC)
Friday, 15 August 2014
Estimate Homography to do 2D to 3D integration (20): Extract known points in pairs automatically
I. The results of 3D rendering:
* There is no function yet that lets users interact with these two demos. Currently, users can only watch them after I collect the extracted frames and combine them into video streams.
* The viewing angles are set by me; each demo uses three viewing angles.
* Each video shows the view from the Kinect camera in the World Coordinate System, and shows the four fixations using yellow crosses (ground truth) and red crosses (the results of our proposed method).
* Here is the download link for clearer demos showing those eight crosses: 3D rendering videos
- Demo 1:
- Demo 2:
Draft Master Thesis (6)
I. Focus on writing the "Implementation" part, which will cover:
- Introducing all components of our proposed system
- Describing the way it is implemented
- Collecting supporting images and graphs
- Gathering all materials used in my implementation while I work on it
- Surveying related references, websites, and toolboxes
II. Spend some time writing a bit of the "Results" part
Thursday, 14 August 2014
Estimate Homography to do 2D to 3D integration (19): Extract known points in pairs automatically
I. Studying how to do 3D rendering:
- Coding
- Finding a suitable source as a demonstration
II. Modifying the graphs of (x, y, t) and (X, Y, Z):
- (x, y, t)
- (X, Y, Z)
Wednesday, 13 August 2014
Summary of Weekly Meeting - (20)
I. Make a demo of 3D rendering:
- This demo should look like this: https://www.youtube.com/watch?v=r26fmr0UpBs
II. Possible additional experiments to expand the "Experimental Results" chapter:
1. Comparing the results of the proposed system when fixation points are located automatically versus marked manually
2. Discussing the strengths and weaknesses
3. Validating each component (minor)
III. Need to organize the results of precision, accuracy, graphs, and extracted frames well:
- Show titles and units in the (x, y, t) and (X, Y, Z) graphs
- Make a clear table presenting the precision and accuracy statistics
Tuesday, 12 August 2014
Draft Master Thesis (5)
I. Finish three chapters:
- Acknowledgement
- Introduction
- Literature Review (90%)
II. Other parts also have content:
- Bibliography
- Figures related to the whole thesis
- Titles of each chapter
Monday, 11 August 2014
Draft Master Thesis (4)
I. Make preliminary decisions about:
- How many chapters to write?
- What are these chapters?
- Make a plan for the additional experiments to show in the thesis (besides precision, accuracy, and extracted frames)
Friday, 8 August 2014
Draft Master Thesis (3)
I. Collecting experimental results
II. Writing the section on the components of our proposed method
Estimate Homography to do 2D to 3D integration (18): Extract known points in pairs automatically
I. Writing code to draw the (x, y, t) and (X, Y, Z) graphs (see the sketch after this list)
* Only the results for the red color marker are shown here
* For each video type, the first image shows (x, y, t) and the second shows (X, Y, Z)
- x, y: 2D coordinates (the position on a Kinect frame)
- t: frame number
- X, Y, Z: 3D coordinates (the position in the World Coordinate System)
- Zoom:
- Viewpoint_change:
- Tilt:
- Real case 1:
- Real case 2:
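A minimal MATLAB sketch of how such graphs can be drawn (the variable names x, y, t, X, Y, Z hold one marker's trajectory and are assumptions; the actual plotting code may differ):

```matlab
% (x, y, t): marker position on the Kinect frame over time
figure;
plot3(x, y, t, '.-');
xlabel('x (pixels)'); ylabel('y (pixels)'); zlabel('t (frame no.)');
title('Marker trajectory on the Kinect frame over time');
grid on;

% (X, Y, Z): marker position in the World Coordinate System
figure;
plot3(X, Y, Z, '.-');
xlabel('X'); ylabel('Y'); zlabel('Z');
title('Marker trajectory in the World Coordinate System');
grid on;
```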
Thursday, 7 August 2014
Estimate Homography to do 2D to 3D integration (17): Extract known points in pairs automatically
I. Draft code to compute precision and accuracy for these five demo videos:
[ Precision Part ]
* There are eight rows in the picture marked "Precision"; each corresponds to one color marker
* The order of the color markers is: yellow, red, blue, green
* Precision_2D_PPAF refers to precision in 2D; Precision_3D_PPAF refers to 3D
[ Accuracy Part ]
* There are four rows in each of the two "ans" outputs in the picture marked "Accuracy"; each corresponds to one color marker
* The order of the color markers is: yellow, red, blue, green
* nframes: the total number of frames of that video used for the experiment (including the first, initial frame; the first frame is excluded from the subsequent computation, however)
* The accuracy_2D output contains the following columns (a sketch of the binning follows this list):
- Column 1: offset <= 3 pixels in 2D, after the "2D-to-3D re-projection" computation
- Column 2: offset <= 5 pixels
- Column 3: offset <= 8 pixels
- Column 4: offset <= 10 pixels
- Column 5: offset <= 15 pixels
- Column 6: offset <= 20 pixels
- Column 7: offset > 20 pixels
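A rough MATLAB sketch of how these bins could be computed (variable names such as ptsReproj and ptsTruth are assumptions, and the columns are assumed to be non-overlapping ranges rather than cumulative counts):

```matlab
% Per-frame 2D offset between re-projected points and ground truth
offsets = sqrt(sum((ptsReproj - ptsTruth).^2, 2));
% Bin edges matching the seven columns described above
edges = [0 3 5 8 10 15 20 Inf];
accuracy_2D = histcounts(offsets, edges);   % counts for columns 1..7
```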
- Zoom:
Precision: the first 4 values of ans are in 2D; the next 4 values are in 3D
Accuracy: offsets of the 4 color markers [ <=3 pixels | <=5 pixels | <=8 pixels | <=10 pixels | <=15 pixels | <=20 pixels | >20 pixels ]
- Viewpoint_change:
Precision: the first 4 values of ans are in 2D; the next 4 values are in 3D
Accuracy: offsets of the 4 color markers [ <=3 pixels | <=5 pixels | <=8 pixels | <=10 pixels | <=15 pixels | <=20 pixels | >20 pixels ]
- Tilt:
Precision: the first 4 values of ans are in 2D; the next 4 values are in 3D
Accuracy: offsets of the 4 color markers [ <=3 pixels | <=5 pixels | <=8 pixels | <=10 pixels | <=15 pixels | <=20 pixels | >20 pixels ]
- Real case 2:
Precision: the first 4 values of ans are in 2D; the next 4 values are in 3D
Accuracy: offsets of the 4 color markers [ <=3 pixels | <=5 pixels | <=8 pixels | <=10 pixels | <=15 pixels | <=20 pixels | >20 pixels ]
- Real case 3:
Precision: the first 4 values of ans are in 2D; the next 4 values are in 3D
Accuracy: offsets of the 4 color markers [ <=3 pixels | <=5 pixels | <=8 pixels | <=10 pixels | <=15 pixels | <=20 pixels | >20 pixels ]
Wednesday, 6 August 2014
Estimate Homography to do 2D to 3D integration (16): Extract known points in pairs automatically
I. Extract frames from recorded videos:
- These are for the final presentation
- zoom:
- viewpoint_change:
- tilt:
- real case 1:
- real case 2:
Tuesday, 5 August 2014
Estimate Homography to do 2D to 3D integration (15): Extract known points in pairs automatically
I. Part of the "real case" demo
II. Need to assign the possession points to these two real-case videos, or manually mark them via my GUI
- Info of possession points of real-case-1:
- Info of possession points of real-case-2:
Monday, 4 August 2014
Estimate Homography to do 2D to 3D integration (14): Extract known points in pairs automatically
I. Find the limitations of my proposed system by collecting real cases for the demo:
- The brightness of the environment causes unstable color-recognition results from the Particle Filter method
- Not robust enough for the "look at the color markers, look elsewhere, then look back at the color markers" process; not as stable as with the webcam (the eye-tracker shows lower color brightness than the webcam in the extracted video frames)
II. The webcam's frame rate is not constant; instead of the webcam, adopt the eye-tracker for all experiments...
- Figure out how to make the two videos (one from the Kinect, one from the eye-tracker) play simultaneously: the frame rate is the key
- Kinect: 30 FPS; eye-tracker: 24 FPS
- Here is the way to do it (a sketch follows):
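A minimal sketch of one way to align the two streams (assuming constant nominal frame rates and synchronized starts; the actual alignment method may differ):

```matlab
fpsKinect = 30;                            % Kinect frame rate
fpsEye    = 24;                            % eye-tracker frame rate
nEye      = 240;                           % example: number of eye-tracker frames
tEye      = (0:nEye-1) / fpsEye;           % timestamps of eye-tracker frames (s)
kinectIdx = round(tEye * fpsKinect) + 1;   % nearest Kinect frame (1-based index)
% e.g. eye-tracker frame 25 (t = 1 s) maps to Kinect frame 31
```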
III. Part of the experimental results for the final demo
- Zoom, Tilt, Viewpoint_Change
Friday, 1 August 2014
Draft Master Thesis (2)
I. Survey relevant references:
- Particle Filter
- EPnP
- Ray Tracing
- Camera Calibration
- Similar work to mine
II. Modified the previous literature-review document
III. Writing Thesis
Thursday, 31 July 2014
Estimate Homography to do 2D to 3D integration (13): Extract known points in pairs automatically
I. Maintain my code and solve the problem that I can't read frames from a video at random:
- Save the video into a mat-file (see the sketch below)
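A minimal sketch of this workaround (the file name is a placeholder; it assumes a uint8 RGB video small enough to hold in memory):

```matlab
% Read the whole video sequentially once...
v = VideoReader('webcam_video.avi');          % hypothetical file name
k = 0;
while hasFrame(v)
    f = readFrame(v);
    k = k + 1;
    if k == 1
        frames = zeros([size(f) 1], 'uint8'); % preallocate from the first frame
    end
    frames(:, :, :, k) = f;
end
save('webcam_video.mat', 'frames', '-v7.3');  % v7.3 supports partial loading

% ...then read any frame at random without re-decoding the video
m = matfile('webcam_video.mat');
frame120 = m.frames(:, :, :, 120);
imshow(frame120);
```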
II. Collecting testing videos
- Zoom, tilt, and viewpoint_change are OK
- The real-case demo isn't fine yet, because of the unstable capture of the color markers as known points
Wednesday, 30 July 2014
Estimate Homography to do 2D to 3D integration (12): Extract known points in pairs automatically
I. Introduce the reference toolbox found yesterday into my current system
- Combine the whole process of computing the "projective coordinate to World coordinate" transformation in MATLAB
- Continuously record a video and compute the above transformation for every frame
- Need to check whether the depth values are the same as those shown by the C++ version
II. Complete the modifications mentioned above:
- Run verifications on the 3 cases (zoom, tilt, viewpoint_change)
- Also, increase the number of known points (the initial point pairs for EPnP) from 12 to 36, i.e., nine points located in each color marker (see the sketch below)
- The result improves a lot (likely due to the increased number of initial point pairs as well as the more accurate segmentation results): 1 to 2 pixels of error in zoom and tilt; 1 to 3 pixels of error in viewpoint_change
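A rough sketch of how nine points per marker could be sampled (assuming each marker has already been segmented into a binary mask; the names are illustrative, not the actual code):

```matlab
% Take a 3x3 grid of interior points from each marker's bounding box
stats = regionprops(bwMarker, 'BoundingBox');   % bwMarker: one marker's mask
bb = stats(1).BoundingBox;                      % [x y width height]
gx = bb(1) + bb(3) * [0.25 0.5 0.75];           % interior x positions
gy = bb(2) + bb(4) * [0.25 0.5 0.75];           % interior y positions
[GX, GY] = meshgrid(gx, gy);
knownPts2D = [GX(:) GY(:)];                     % 9 known 2D points for EPnP
```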
Tuesday, 29 July 2014
Estimate Homography to do 2D to 3D integration (11): Extract known points in pairs automatically
I. Re-install OpenNI1:
- Would like to make my whole pipeline work in MATLAB
- Need to continuously record video from the Kinect; my previous framework only output a single image from the Kinect via OpenNI2 in C++
- In both the 32-bit and 64-bit versions, only the following file versions can drive the Kinect:
* openni-win64-1.5.4.0-dev / openni-win32-1.5.4.0-dev
* SensorKinect093-Bin-Win64-v5.1.2.1 / SensorKinect093-Bin-Win32-v5.1.2.1
* nite-win64-1.5.2.21-dev / nite-win32-1.5.2.21-dev
- In OpenNI1, remember to edit the XML files in the OpenNI and NiTE folders
II. Reference for doing the "projective coordinate to World coordinate" transformation in MATLAB
- Kinect MATLAB
- This toolbox can deal with the issue, but only OpenNI1 provides these functions
Successfully set up the OpenNI1 environment and applied this reference toolbox in MATLAB.
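For reference, a minimal sketch of the underlying pinhole back-projection (the intrinsic values below are commonly cited Kinect defaults and are assumptions; the toolbox performs this conversion internally):

```matlab
% Convert a projective (pixel) coordinate plus depth to a world coordinate
fx = 594.21; fy = 591.04;              % assumed focal lengths (pixels)
cx = 339.5;  cy = 242.7;               % assumed principal point
u = 320; v = 240;                      % example pixel
Z = double(depthMap(v, u)) / 1000;     % depth in metres at pixel (u, v)
X = (u - cx) * Z / fx;
Y = (cy - v) * Z / fy;                 % flip Y so it points upward
worldPt = [X Y Z];
```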
Monday, 28 July 2014
Estimate Homography to do 2D to 3D integration (10): Extract known points in pairs automatically
I. Re-making the color markers because of last week's unstable results
- Modified the size: circles of 150x150 pixels
- Chose colors recognizable through the eye-tracker (which has lower brightness):
yellow (255, 255, 0);
red (255, 0, 0);
light_blue (0, 255, 255);
green (0, 255, 0)
II. Testing videos
1. Simple cases (zoom, tilt, viewpoint_change)
2. Real cases (only 3 of 4 shown)
III. Experimental results
1. Simple cases:
- In the zoom and tilt cases: 1 to 3 pixels of error on average, 5 to 8 pixels at worst
- In the viewpoint_change case: 1 to 5 pixels of error on average, 8 to 12 pixels at worst
IV. Real-time demo
- The whole system can now run by reading video directly from a USB webcam
- Reading pre-recorded videos is no longer the only way to run the whole system
- Issue: delay while running, due to the heavy computation in our proposed system (EPnP -> ray tracing)
- This is not a major issue currently, but it can be improved
Saturday, 26 July 2014
Estimate Homography to do 2D to 3D integration (9): Extract known points in pairs automatically
I. Using bigger color markers
II. Draft code (see the sketch after this list):
- Using the positions of the seeds generated by the Particle Filter as hints
- Combining morphological operations with these hints to segment the color markers more accurately
- Finding the central position of the points in each marker as known points for EPnP
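A minimal sketch of this refinement step (the threshold values, target color format, and function name are illustrative assumptions):

```matlab
% Refine a marker region near a Particle Filter seed and return its centroid
function c = markerCentroid(rgb, seed, targetColor)
    % color distance to the target color at every pixel
    d = sqrt(sum(bsxfun(@minus, double(rgb), ...
                        reshape(targetColor, 1, 1, 3)).^2, 3));
    bw = d < 60;                           % assumed color-distance threshold
    bw = imopen(bw, strel('disk', 3));     % remove speckle noise
    bw = imclose(bw, strel('disk', 5));    % fill small holes
    stats = regionprops(bw, 'Centroid');
    cents = cat(1, stats.Centroid);
    % the hint: keep the component whose centroid is closest to the seed
    [~, idx] = min(sum(bsxfun(@minus, cents, seed).^2, 2));
    c = cents(idx, :);
end
```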
III. Experimental results
- In the zoom, tilt, and viewpoint_change cases, the (x, y) error is 10 to 12 pixels on average
- The error is higher than with the previously used smaller markers (3 to 5 pixels of error per frame)
IV. Recording a video to simulate the real working case
- Contains the possession-point information across frames
- Video:
V. Discussion with my supervisor, reporting the preliminary results
- A 10-pixel error in my case (640x480 resolution) is acceptable, but it can still be improved
- My third experiment is not limited to a particular scenario; the aim is just a realistic task that demonstrates our proposed framework works well, especially with the eye-tracker
Friday, 25 July 2014
Estimate Homography to do 2D to 3D integration (8): Extract known points in pairs automatically
I. Eye-tracker calibration to get the extrinsic parameters
- Some images are unsuitable for this processing and need to be regenerated.
- Result:
II. Pre-testing the videos recorded by the eye-tracker
- The brightness is lower than with the webcam, which makes the color-based segmentation work incorrectly.
- The small color markers need to be enlarged so that the viewing distance from the eye-tracker to the object can be far enough to fit real-life needs
Thursday, 24 July 2014
Draft Master Thesis (1)
I. Skimming a past student's thesis to get more ideas about how to plan mine
II. Read some relevant references
Wednesday, 23 July 2014
Summary of Weekly Meeting - (19)
I. Summary
- Which experiments to show in the end:
(a) Using a chessboard to show the proposed method is robust (manual case)
(b) Using color markers to show the proposed method is acceptable (more automated case)
(c) Using the eye-tracker to make a demo with some real surgical accessories
- Ways to demonstrate the results:
(a) Re-projected coordinates from the 2D_Img onto the 2D_Kinect_frame (in video)
(b) (x, y, t) graph of the re-projected coordinates on the 2D_Kinect_frames
(c) (X, Y, Z) graph of the re-projected coordinates in the 3D_world_coordinate_system
- Next work:
(a) Using the eye-tracker to substitute for the webcam
(b) Tracking a colorful object via the eye-tracker (generating the possession points across time)
(c) Showing the possession points on the 2D_Kinect_frame (based on my framework)
II. Generating videos from the eye-tracker for camera calibration (via the calibration toolbox)
Tuesday, 22 July 2014
Estimate Homography to do 2D to 3D integration (7): Extract known points in pairs automatically
I. Results:
- Modifications:
(a) Decreased the size of the color markers to 20x20 pixels
(b) Increased the number of known points from 3 to 6 (as input for the EPnP algorithm)
Friday, 18 July 2014
Estimate Homography to do 2D to 3D integration (5): Extract known points in pairs automatically
I. Making two experimental setups to address the inaccuracy issue in my current framework:
For each setup:
- 1 Kinect frame
- 4 webcam videos including zoom, rotation, tilt, viewpoint_change
1. Decrease the size of the color markers
2. Pre-set additional markers at the centers of the original markers
Thursday, 17 July 2014
Summary of Weekly Meeting - (18)
I. Summary:
- Using a color-based Particle Filter mechanism to automatically locate the color markers (with real-time verification)
- Achieving 50% to 60% correct results in the "2D to 3D" work
- Possible ways to improve the accuracy:
1. Decrease the size of the color markers
2. Increase the number of known points extracted from the color markers
3. Treat the information from the Particle Filter as hints, then make a better segmentation of the color markers in the images
- May have a chance to try a wearable eye-tracker
- In terms of the Master Thesis:
1. Obtain a past student's thesis as a reference
2. Start thinking about how to design the experiments to show in the thesis
Tuesday, 15 July 2014
Estimate Homography to do 2D to 3D integration (4): Extract known points in pairs automatically
I. Preliminary results of using the Particle Filter to locate the color markers automatically:
- Including zoom, tilt, rotation, viewpoint_change
II. Use the automatic method in my current framework to see how it works in the "2D to 3D" process
- The process is not accurate and stable enough, for the following reasons:
1. The centers of the circular color markers are not located accurately
2. The webcam moves too fast
Monday, 14 July 2014
Estimate Homography to do 2D to 3D integration (3): Extract known points in pairs automatically
I. Discussed with experienced people to find solutions for locating my color markers automatically.
II. One of the hints:
- Particle filter + color
- Particle filter tutorial
III. Read the code and documents, then made some modifications to fit my code and needs. (A sketch of the idea follows.)
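A minimal sketch of one step of a color-based particle filter (not the tutorial's actual code; the motion model, color-noise scale, and resampling scheme are assumptions):

```matlab
% One predict-weight-resample step; particles are Nx2 (x, y) hypotheses
function [pos, particles] = pfColorStep(frame, particles, targetColor, noiseStd)
    N = size(particles, 1);
    % 1) Predict: random-walk motion model, clamped to the image bounds
    particles = particles + noiseStd * randn(N, 2);
    particles(:, 1) = min(max(particles(:, 1), 1), size(frame, 2));
    particles(:, 2) = min(max(particles(:, 2), 1), size(frame, 1));
    % 2) Weight: likelihood from the color distance at each particle
    w = zeros(N, 1);
    for i = 1:N
        px = round(particles(i, 1)); py = round(particles(i, 2));
        c = double(squeeze(frame(py, px, :)))';
        w(i) = exp(-sum((c - targetColor).^2) / (2 * 40^2)); % assumed scale
    end
    w = w / sum(w);
    % 3) Estimate the marker position, then resample (multinomial)
    pos = w' * particles;
    cdf = cumsum(w);
    cdf(end) = 1;                          % guard against round-off
    idx = arrayfun(@(r) find(cdf >= r, 1), rand(N, 1));
    particles = particles(idx, :);
end
```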
Saturday, 12 July 2014
Estimate Homography to do 2D to 3D integration (2): Extract known points in pairs automatically
I. Kept drafting the code to extract the four color markers without initial points...
Thursday, 10 July 2014
Estimate Homography to do 2D to 3D integration (1): Extract known points in pairs automatically
I. Collecting data from the Kinect and webcam to verify the implemented method
II. Draft code:
- Focus on the color marker cases
Wednesday, 9 July 2014
Summary of Weekly Meeting - (17)
I. Summary:
1. Reported the result of the "2D image to 3D world" back-projection
2. Need to pay more attention to making the system more automatic in choosing the known point pairs
Tuesday, 8 July 2014
Estimate Homography to do 2D to 3D integration (39): Successfully implement the Ray-Tracing method
I. Debugging my proposed "ray tracing" method, which is not stable in the viewpoint_change case
II. Making video records of the results of "2D Img-to-3D world"
Monday, 7 July 2014
Sunday, 6 July 2014
Estimate Homography to do 2D to 3D integration (37): Reading - Ray Tracing & 3D Coordinate from 2D calibrated camera
I. Debugging the Ray-Tracing codes...
II. Slightly altered the strategy for finding the hit position in the 3D World Coordinate System
Friday, 4 July 2014
Estimate Homography to do 2D to 3D integration (36): Reading - Ray Tracing & 3D Coordinate from 2D calibrated camera
I. Modified code:
- Set up a search range: instead of finding the upper and lower bounds of fZ in the World Coordinate System, use a method like a "bag-of-words" strategy
- Trying to decrease the complexity of finding the corresponding fZ in the Webcam Coordinate System
- Trying to think of a better way to compute similarity in order to find the most similar 3D point in the World Coordinate System
Thursday, 3 July 2014
Estimate Homography to do 2D to 3D integration (35): Reading - Ray Tracing & 3D Coordinate from 2D calibrated camera
I. Drafted code for obtaining 3D coordinates from a 2D calibrated camera (a sketch of the core step follows):
One reference for pseudo-code: http://en.wikipedia.org/wiki/3D_pose_estimation
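A minimal sketch of the back-projection at the heart of this (assuming the intrinsic matrix K and the pose [R t] are known from calibration; the search along the ray is omitted):

```matlab
% Cast a ray from the camera centre through pixel (u, v) into the world
dirCam   = K \ [u; v; 1];      % ray direction in the camera frame
dirWorld = R' * dirCam;        % rotate the direction into the world frame
origin   = -R' * t;            % camera centre expressed in world coordinates
% Points on the ray: P(s) = origin + s * dirWorld, s > 0; the 3D position is
% the point on this ray that best matches the Kinect's world-coordinate data
```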
Wednesday, 2 July 2014
Summary of Weekly Meeting - (16)
I. Expected work and results before the next meeting:
- Accomplish the "2D image-plane coordinate back to 3D world coordinate" process (based on the ray tracing technique)
- Show the known points as well as the testing points at the corresponding positions in the Kinect 2D image (i.e., 3D world coordinate to the Kinect-related 2D projection coordinate)
- Do feature extraction, and use these features to replace the current manually chosen points (i.e., the crosses located on a chessboard)
II. Draft code:
- Output the projective-coordinate and world-coordinate information of each pixel from OpenNI2 (C++ part)
- Save the Kinect outputs, including the projective and world coordinates of each pixel, to a mat-file
- Automatically find the position in the world coordinate system corresponding to a known 2D point
- Check the "3D-world -> 3D-Webcam" transformation part
- Check the "3D-Webcam -> 3D-world" transformation part
- Check the "2D-Webcam-picture -> 3D-Webcam" part
* "Xc_inv_show" is the raw result after the transformation, but it is only a unit-depth direction; it needs to be multiplied by "z_var" (a worked example follows)
* "z_var" is a scaling factor that determines the hit position; the first point's unit-depth direction = (0.12235, -0.066628, 1)
- Draft the "ray tracing" code and think about how to check every Z coordinate cleverly without exhaustive search
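A tiny worked example of that scaling (the z_var value here is hypothetical):

```matlab
% Scale the unit-depth direction by z_var to get the hit position
Xc_inv_show = [0.12235; -0.066628; 1];  % unit-depth direction from the note
z_var = 0.85;                           % hypothetical scale along the ray
hitPos = z_var * Xc_inv_show;           % = [0.1040; -0.0566; 0.8500]
```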