This project was developed as part of my work for the course *IFT6145 Vision 3D* at the Université de Montréal in winter 2011.
It is based on the work of Shum and He in their paper *Rendering with Concentric Mosaics*. The procedure works as follows: starting with two cameras, one in pure rotation and the other moving along a circle of radius *r*, we take snapshots covering slightly more than 360°. The goal is then to detect keypoints in overlapping pictures and use bundle adjustment to infer the camera parameters, which let us reproject each image into a cylindrical coordinate system. By taking vertical slits from different snapshots and blending them together, we obtain a complete panorama seen from two different viewpoints. This in turn allows us to do fancy things, such as projecting both panoramas onto a circular screen, thus experiencing a real 3D panorama.
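To give a rough idea of the slit-based assembly step, here is a minimal sketch in Python/NumPy. It is not the project's actual code: the function names and the choice of a one-pixel-wide central slit are illustrative assumptions, and real concentric-mosaic rendering would interpolate and blend between slits rather than simply concatenate them.

```python
import numpy as np

def cylindrical_coords(x, y, f, cx, cy):
    """Map a pixel (x, y) to cylindrical coordinates (theta, h),
    assuming a pinhole camera with focal length f (in pixels) and
    principal point (cx, cy)."""
    theta = np.arctan2(x - cx, f)          # angle around the cylinder axis
    h = (y - cy) / np.hypot(x - cx, f)     # normalized height on the cylinder
    return theta, h

def center_slit(image):
    """Extract the central vertical slit (a one-pixel-wide column)."""
    return image[:, image.shape[1] // 2]

def assemble_panorama(frames):
    """Concatenate the center slits of successive frames, assumed to be
    ordered by the camera's rotation angle, into a panoramic strip."""
    return np.stack([center_slit(fr) for fr in frames], axis=1)
```

For example, feeding in N frames of shape (H, W, 3) yields a panoramic strip of shape (H, N, 3); doing this for both cameras produces the two panoramas mentioned above.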

Anyway, if you’re interested in reading more, you’ll find a report (in French) on my work here, along with the source code of the project.