Camera tracking allows you to match previously filmed video footage with your rendered architectural animation.
Sometimes a floor plan is enough. Other times, a simple perspective of the volumes meets our needs. The next step could be the development of a highly detailed view. QTVR technology (we will talk about it in a few weeks) allows us to generate an interactive and relatively “immersive” scene.
If we take another step, we can make an animation of the project.
The video montage is to the animation what the photomontage is to the perspective. To make it possible, we have to use camera tracking techniques.
There are various degrees or possibilities. On a real recording in which there is no camera movement, we can compose an image of the building and integrate it into a real scene in which “things happen”: the wind moves the trees, a real person walks in front of the building, and so on.
Today we are going to make it “even harder”.
On a real recording in which the camera moves, we composite our nonexistent scene into the filmed reality.
Besides, this scene will be built gradually.
But let’s go step by step.
3D modeling, before camera tracking
This time, we modeled a residential building and a park. Then we added to the scene trees of different sizes and species, cars (parked and in motion), and some people cycling.
Motion capture for camera tracking
The most complicated part of the experiment is transferring the movement of the real camera to the 3D program.
The technique consists of the following: from the differences between one frame and the next, the program establishes a cloud of key points and tracks them through time and space, making camera tracking possible.
From these data, it generates a camera path that is transferred to the 3D program, so the virtual camera moves just like the camera that recorded the real footage. It is magical… almost. The truth is that this wonderful process is not so simple. I wish it were, but it takes multiple attempts to achieve a near-perfect match.
Rendering process
Once the camera movement was matched, we decided not to limit ourselves to a simple animation. Therefore, we created a growth process for both the building and the park. As expected, the limitation came from the lighting side. Successive tests brought us closer to a credible solution, which we finally retouched in post-production.
Although the reason for this test was to get started with camera tracking, we decided to make a series of shots to give some dynamism to the final product.
Post-production
80% of the work was done. However, the remaining 20% was the most important and laborious.
On one hand, the dynamic color correction to match the rendered scene with the recorded video.
On the other hand, the generation of the “patches” necessary to create the complete illusion.
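For the color correction side, a very simple global approach (a sketch, not our actual grading workflow, which was done per shot in the compositing software) is to match the mean and spread of each channel of the render to those of the footage:

```python
import numpy as np

def match_color(render, footage):
    """Shift and scale each channel of the rendered frame so its mean
    and standard deviation match the live footage; a crude global
    color correction, assuming 8-bit RGB frames."""
    render = render.astype(np.float32)
    footage = footage.astype(np.float32)
    out = np.empty_like(render)
    for c in range(3):
        r, f = render[..., c], footage[..., c]
        scale = f.std() / (r.std() + 1e-6)  # avoid division by zero
        out[..., c] = (r - r.mean()) * scale + f.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```

A real grade would also be keyframed over time (“dynamic”), since the footage lighting changes from frame to frame.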
One of the patches consisted of generating a mask to duplicate the surrounding buildings, so that they would overlap the final render. These patches are needed at various moments of the montage.
Another patch consisted of placing, during the rendering process, some fictitious planes that cast shadows over the park, simulating the ones thrown by the masked buildings.
Finally, the detail of the helicopter’s shadow: if it had not been created in post-production, the shadow would have been interrupted where it was covered by the rendered 3D model.
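The idea behind the mask patch can be illustrated with a toy compositing function (hypothetical names, assuming 8-bit RGB frames): wherever the mask is on, the filmed buildings stay on top of the render.

```python
import numpy as np

def composite_with_mask(footage, render, mask):
    """Where mask == 1, keep the filmed pixels (the surrounding
    buildings) on top of the render; elsewhere show the rendered
    scene. A toy version of the patch described above."""
    m = mask[..., None].astype(np.float32)  # broadcast over RGB
    out = footage.astype(np.float32) * m + render.astype(np.float32) * (1 - m)
    return out.astype(np.uint8)

# Toy frames: bright footage, dark render, mask covering the top half.
footage = np.full((4, 4, 3), 200, dtype=np.uint8)
render = np.full((4, 4, 3), 50, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2] = 1
result = composite_with_mask(footage, render, mask)
```

In production the mask would be rotoscoped by hand and feathered at the edges rather than a hard binary matte.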
TO SUM UP
The final result is attractive, perhaps a little long, but remember that it was a camera tracking test.
If, as we expect, we repeat the experiment, we dare to give one piece of advice: planning.
It is tempting to go for spectacular shots, but at the same time they must not demand masking work that is impossible to solve.
This technique, which we began to explore in 2011, later served us in projects like OXALIS. There we got the video footage with a drone, with a very accomplished result. I hope you like it.