http://www.mathworks.co.uk/help/matlab/ref/print.html
http://stackoverflow.com/questions/12160184/how-to-save-a-figure-in-matlab-in-from-the-command-line
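For saving the result figures from the command line (per the MATLAB print / Stack Overflow links above), a minimal sketch; the file names and the 300-dpi resolution here are placeholders, not the actual output names:

    % Minimal sketch: export the current figure from the command line.
    % File names and the 300-dpi resolution are placeholder choices.
    fig = figure;
    plot(1:10, rand(1, 10));                     % any plot to be exported
    print(fig, '-dpng', '-r300', 'result.png');  % raster PNG at 300 dpi
    saveas(fig, 'result.fig');                   % MATLAB .fig for later editing
    close(fig);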
II. Experimental setup:
• Target
• First scenario - near
• Second scenario - medium
• Third scenario - far
III. We collected the following data:
* 'k-Img' means a Kinect frame; 'hh-Img' represents a hand-held camera image
• 3 k-Imgs at near, medium, and far distances to the target ... (for matching tasks)
• 3 sets of k-Imgs (near, medium, far), taken from different perspectives around 360 degrees ... (for stitching tasks)
• 5 hh-Imgs at different scales
• 5 sets of hh-Imgs at 5 different scales, taken from different perspectives around 360 degrees
• 10 hh-Imgs with simple affine transformations
• 16 hh-Imgs at 2 different scales, taken from different perspectives around 360 degrees
IV. Running the re-setup experiment (matching tasks - checking the capability of the SIFT descriptor in our cases; a matching sketch follows the list below):
* 'k-Img' means a Kinect frame; 'hh-Img' represents a hand-held camera image
• 1 k-Img (3 in total, at different scales) vs. 5 hh-Imgs (at 5 different scales)
• 1 k-Img (3 in total, at different scales) vs. 10 hh-Imgs (with simple affine transformations)
• 1 k-Img (3 in total, at different scales) vs. 16 hh-Imgs from different perspectives (per dataset; 2 sets of hh-Imgs in total, at 2 different scales)
• 1 k-Img (3 in total, at different scales) vs. 24 hh-Imgs with rotation (per dataset; 5 sets of hh-Imgs in total, at 5 different scales)
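A sketch of a single matching run (one k-Img vs. one hh-Img), assuming MATLAB with the VLFeat toolbox (vl_sift / vl_ubcmatch); the image file names are placeholders, not the actual dataset paths:

    % One k-Img vs. hh-Img matching run, assuming VLFeat is installed
    % and on the path (run vl_setup first). File names are placeholders.
    Ik = imread('kinect_near.png');          % one Kinect frame
    Ih = imread('handheld_scale1.jpg');      % one hand-held camera image
    Gk = single(rgb2gray(Ik));               % vl_sift expects single-precision grayscale
    Gh = single(rgb2gray(Ih));
    [fk, dk] = vl_sift(Gk);                  % SIFT frames and descriptors (Kinect)
    [fh, dh] = vl_sift(Gh);                  % SIFT frames and descriptors (hand-held)
    [matches, scores] = vl_ubcmatch(dk, dh, 1.5);   % descriptor matching with ratio threshold
    fprintf('%d tentative matches\n', size(matches, 2));
    % Show the matched keypoints on the Kinect frame and save the figure
    % from the command line (see the print/saveas links at the top).
    figure; imshow(Ik); hold on;
    vl_plotframe(fk(:, matches(1, :)));
    print('-dpng', 'near_vs_scale1_matches.png');

The third argument of vl_ubcmatch is the ratio-test threshold; raising it keeps only the more distinctive matches.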
V. Results... to be pasted tomorrow: