Egomotion 3D: Exploring Algorithms for 2D to 3D Scene Extrapolation

In summary, there is no single universally accepted algorithm for extrapolating 3D structure from 2D images. Estimating camera motion from images is usually called monocular SLAM or visual odometry and is most often solved with bundle adjustment, while other approaches rely on active sensing such as panoramic laser rangefinders or projected light.
  • #1
Superposed_Cat
Hey all, I'm looking to write an egomotion program and want to research existing algorithms and the math behind them, but googling "egomotion algorithms" doesn't seem to turn anything up. Is there a more popular name for extrapolating a 3D scene from a 2D scene? Or can anyone post a link? Any help appreciated.
 
  • #2
Do you have to extrapolate from just a 2D image? That puts limitations on you that you wouldn't have with a solution like the Kinect, which builds the 3D scene from projected laser points. In fact, the best solutions generally involve projecting light of some form.

As far as getting 3D info from 2D goes, Android has an interesting app; you can see its output here (if you have a browser with WebGL):
https://www.chromeexperiments.com/experiment/android-lens-blur-depth-data-player
http://www.clicktorelease.com/code/depth-player/
That one uses no projection, just a single phone camera. I believe it refocuses the lens and derives depth from which areas are sharp or blurry at each focus setting. If you had a good robot whose wheels never slip, and the environment is stationary, you could potentially average in snapshots from other positions to get an even better picture.
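
For what it's worth, here is a rough sketch of that depth-from-focus idea (my own illustration, not the app's actual algorithm): given a stack of images taken at different focus settings, pick, for each pixel, the focus setting at which it looks sharpest. The focal stack below is just random placeholder data standing in for real photographs.

```python
# Rough depth-from-focus sketch: per-pixel sharpness over a focal stack.
# The stack here is random placeholder data, not a real focus sweep.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack):
    """stack: (n_focus, H, W) grayscale focal stack.
    Returns an (H, W) map of the focus index that maximizes local sharpness."""
    # Local contrast (absolute Laplacian, smoothed) is a simple sharpness cue.
    sharpness = np.stack([uniform_filter(np.abs(laplace(img)), size=9)
                          for img in stack])
    return np.argmax(sharpness, axis=0)

# Toy focal stack: 5 noise images standing in for real photographs.
stack = np.random.rand(5, 240, 320)
depth_index = depth_from_focus(stack)
print(depth_index.shape)  # (240, 320)
```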
 
  • #3
Egomotion determination from 2D pictures is called SLAM (simultaneous localization and mapping), specifically monocular SLAM. The best class of these algorithms is bundle adjustment, a nonlinear optimization of the reprojection error of the features. You can find downloadable implementations in the MRPT library, OpenSLAM, LSD-SLAM... I am an old computer vision PhD; if you have questions, ask me.
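
To make the reprojection-error idea concrete, here is a minimal sketch of the optimization at the heart of bundle adjustment: a single camera with known focal length, toy data, and SciPy's least_squares standing in for a real solver. It is only an illustration, not the MRPT or LSD-SLAM implementation.

```python
# Minimal reprojection-error sketch behind bundle adjustment: refine a camera
# pose and 3D points jointly so projections match observed 2D features.
# One camera, known focal length, toy data; purely illustrative.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rvec, tvec, focal):
    """Project 3D points into a pinhole camera given an axis-angle rotation."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + tvec              # world -> camera frame
    return focal * cam[:, :2] / cam[:, 2:3]   # perspective divide

def residuals(params, observations, focal):
    """Flattened reprojection errors for one camera and N points."""
    rvec, tvec = params[:3], params[3:6]
    points_3d = params[6:].reshape(-1, 3)
    return (project(points_3d, rvec, tvec, focal) - observations).ravel()

# Toy example: perturb a known pose and points, then recover them by
# minimizing the reprojection error with nonlinear least squares.
rng = np.random.default_rng(0)
true_points = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5])
true_rvec, true_tvec, focal = np.array([0.0, 0.1, 0.0]), np.array([0.2, 0.0, 0.0]), 500.0
obs = project(true_points, true_rvec, true_tvec, focal)

x0 = np.hstack([true_rvec + 0.05, true_tvec + 0.05, (true_points + 0.1).ravel()])
result = least_squares(residuals, x0, args=(obs, focal))
print("final reprojection RMS:", np.sqrt(np.mean(result.fun ** 2)))
```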
 
  • #4
kroni said:
Egomotion determination from 2D pictures is called SLAM (simultaneous localization and mapping), specifically monocular SLAM. The best class of these algorithms is bundle adjustment, a nonlinear optimization of the reprojection error of the features. You can find downloadable implementations in the MRPT library, OpenSLAM, LSD-SLAM... I am an old computer vision PhD; if you have questions, ask me.
A computer vision PhD? Actually I have a question: Do you know or have any educated guesses about what the spinning sensor on the Boston Dynamics robots is? Video here:

I can guess that it's maybe projecting some sort of plane and calculating depth from the offset seen by a camera, but I don't know.
 
  • #5
Yes, this spinning sensor is a panoramic laser rangefinder, maybe a Riegl or a Velodyne. It is composed of 32 or 64 lasers stacked vertically, and as the sensor spins it gives a point cloud representing the scene. I work with this kind of sensor in my research.
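
For anyone curious how such a scan becomes a point cloud, here is a minimal sketch assuming an idealized 32-beam sensor: each laser has a fixed elevation angle, the head sweeps in azimuth, and every range reading is converted from spherical to Cartesian coordinates. The beam angles and ranges below are made up.

```python
# Idealized spinning rangefinder: fixed per-beam elevation angles, a sweep of
# azimuth angles, and ranges converted from spherical to Cartesian coordinates.
import numpy as np

def ranges_to_point_cloud(ranges, elevations, azimuths):
    """ranges: (n_beams, n_azimuths) distances in metres.
    elevations: (n_beams,) fixed vertical angles of each laser, radians.
    azimuths: (n_azimuths,) rotation angles of the spinning head, radians."""
    el = elevations[:, None]      # broadcast beams over azimuth steps
    az = azimuths[None, :]
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy scan: 32 beams, 360 azimuth steps, everything 10 m away.
elev = np.deg2rad(np.linspace(-15, 15, 32))
azim = np.deg2rad(np.arange(0, 360, 1.0))
cloud = ranges_to_point_cloud(np.full((32, 360), 10.0), elev, azim)
print(cloud.shape)  # (11520, 3)
```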
 
  • #6
kroni said:
Yes, this spinning sensor is a panoramic laser rangefinder, maybe a Riegl or a Velodyne. It is composed of 32 or 64 lasers stacked vertically, and as the sensor spins it gives a point cloud representing the scene. I work with this kind of sensor in my research.
Ah! If I'm not mistaken, then, the principle behind it has actually been around for a while, but it must still be well suited for the job if they're using it. Fascinating, and thanks for your reply!
 

Related to Egomotion 3D: Exploring Algorithms for 2D to 3D Scene Extrapolation

1. What is Egomotion 3D from 2D?

Egomotion 3D from 2D is a computer vision technique that uses 2D images or video frames to estimate the 3D motion of a camera or observer in a scene. It is also known as visual odometry or structure from motion.

2. How does Egomotion 3D from 2D work?

Egomotion 3D from 2D works by analyzing the changes in appearance and position of key points or features in consecutive 2D images or video frames. These changes are used to calculate the camera's translation and rotation, which then allows for the reconstruction of a 3D representation of the scene.
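
As a hedged illustration of that pipeline (a common two-frame approach, not the method of any particular product), a minimal sketch using OpenCV might look like the following; the frame filenames and the intrinsics matrix K are placeholders you would replace with your own data.

```python
# Two-frame egomotion sketch with OpenCV: detect and match features, estimate
# the essential matrix with RANSAC, then recover relative rotation and
# translation. "frame0.png"/"frame1.png" and K are placeholder inputs.
import cv2
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics

img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
if img1 is None or img2 is None:
    raise SystemExit("place two consecutive frames next to this script first")

# 1. Detect and describe key points in both frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match features so the same scene point is identified in both images.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. The essential matrix encodes the relative rotation and translation;
#    RANSAC discards mismatched features.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)

# 4. Recover rotation R and translation t (direction only: monocular
#    egomotion has no absolute scale, as noted in the limitations below).
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```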

3. What are the applications of Egomotion 3D from 2D?

Egomotion 3D from 2D has many applications in robotics, augmented reality, autonomous vehicles, and virtual reality. It is also used in video stabilization and motion tracking in sports and entertainment.

4. What are the advantages of using Egomotion 3D from 2D?

Using Egomotion 3D from 2D allows for quick and accurate estimation of camera motion, without the need for additional sensors or markers. It can also handle challenging environments with varying lighting conditions and cluttered scenes.

5. What are the limitations of Egomotion 3D from 2D?

Egomotion 3D from 2D relies heavily on the quality and consistency of the features in the images or video frames. It can also be prone to errors in cases of large camera movements or when the scene lacks distinct features. Additionally, it cannot accurately estimate the scale of the scene without additional information.
