Triangulation for Image Pairs (Cyrill Stachniss)

Published 2020-09-21
Triangulation of 3D Points based on Pairs of Camera Images
Cyrill Stachniss, 2020

Corrections:
11:56 Mistake in the brackets in the last row, right-hand side. Remove inner ][

All Comments (21)
  • @nigelpluto3443
    thanks for making this nice explanation public and freely accessible
  • @letatanu
    I have watched almost all of Prof. Stachniss's videos. Thank you for your lecture.
  • @byynee
    Finally it’s here. Was waiting for this!
  • @vikasshetty6725
    very well explained. waiting for a video on sensor fusion of camera images and 3D point cloud
  • Dr. Cyrill, is this algorithm used to generate the DSM (dense surface model) as a point cloud? If not, which one do photogrammetric software packages such as PhotoModeler use?
  • @SpatialAIKR
    I guess the equations at 9:01 are incorrect. Even though Professor Cyrill indicated that the equation (f-g)⋅r should be (g-f)⋅r, the following equations are still weird. I guess they should be (q + μs - p - λr)⋅s = 0 and (q + μs - p - λr)⋅r = 0.
  • Thank you for the great lecture. I think the Matlab implementation of triangulation uses SVD, which is a linear solution. Do you know of any implementation that offers a non-linear solution for triangulation, perhaps one you have used in your lab?
  • Another question I had is about hand-eye calibration. I tried to capture images of a pattern while recording the position of the robot. I expected to get one fixed result, but that is not the case. The transformation between the robot and camera coordinate systems is obviously fixed, but I think it can differ slightly in the x element of the translation because it depends on the focal length. I captured images over a range of movements (1-4 cm from the pattern), but the estimated transformation seems to give the best result only for the middle of the range. Would you please shed some light on this? I cannot end up with an estimated transformation that gives good results at different distances.
  • @CyrillStachniss Thanks for the great video, professor. I have a question on the quality of the triangulation: is there a way to estimate the uncertainty or the covariance matrix of the triangulated point? The lines may not perfectly intersect (due to noise in the relative poses), and the pixel sizes could define a larger unprojected area. Is there any source where I can learn how to encode this uncertainty as a covariance matrix? You show this for the two-view case; is there a way to estimate it for the multi-view case?
  • @eigenb6455
    Shouldn't the right-hand side of the matrix form of the equation at 12:02 be a single column vector? The ] [ brackets between the transposed vectors and r, s shouldn't be there.
  • @AliDeeb-wh3il
    Thank you Cyrill for this streamlined explanation, but can I ask you about the name of the reference or paper you took that from?
  • @janghopark4637
    I have a question on "absolute orientation". If we can estimate 3D points from a stereo camera (i.e., we know the baseline) and have control points w.r.t. the global frame, then we don't need to estimate the "scale" parameter, right? In this case, 6 DoF?
  • @afaqsaeed622
    How can one find the camera constant c for a real camera during the calibration process? It would be a great help if anyone could answer that.
  • I really like these courses, but can someone tell me why there are advertisements every 3 to 4 minutes? It is really annoying and was not the case before...
  • @AmirSepasi
    I didn't like this lecture. Many points in it were not as clear as they were in other lectures.
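
The closest-point equations debated in the comments above can be sketched in code. Below is a minimal, illustrative NumPy implementation of the midpoint (closest-point) triangulation for two 3D rays x = p + λr and x = q + μs, solving (p + λr − q − μs)⋅r = 0 and (p + λr − q − μs)⋅s = 0 as a 2×2 linear system. The function name and interface are my own, not code from the lecture; for the multi-camera case the lecture's linear (SVD/DLT) formulation would be used instead.

```python
import numpy as np

def triangulate_midpoint(p, r, q, s):
    """Closest-point triangulation of two 3D rays.

    Ray 1: x = p + lam * r
    Ray 2: x = q + mu  * s

    Solves (p + lam*r - q - mu*s) . r = 0 and
           (p + lam*r - q - mu*s) . s = 0
    for (lam, mu), then returns the midpoint of the two
    closest points as the triangulated 3D point.
    """
    p, r, q, s = (np.asarray(v, dtype=float) for v in (p, r, q, s))

    # 2x2 normal system in (lam, mu); singular if the rays are parallel
    A = np.array([[r @ r, -s @ r],
                  [r @ s, -s @ s]])
    b = np.array([(q - p) @ r,
                  (q - p) @ s])
    lam, mu = np.linalg.solve(A, b)

    x1 = p + lam * r   # closest point on ray 1
    x2 = q + mu * s    # closest point on ray 2
    return 0.5 * (x1 + x2)
```

As a sanity check: two rays that actually intersect, e.g. p = (0,0,0), r = (1,2,3) and q = (4,0,0), s = (−3,2,3), both pass through (1,2,3), and the midpoint solution recovers exactly that point. With noisy, skew rays the function instead returns the point midway between the two lines where they pass closest to each other.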