There is another technique for setting the far parallax, preferred by some stereographers because it avoids losing a stripe of pixels on the left and right edges of the image when one image is shifted relative to the other in post-production:
Rather than shifting one image relative to the other, we can angulate a camera, that is, pan it just a little bit (in most cases, less than 1° is enough). This is also called camera convergence.
But be careful not to mix up camera convergence with the convergence of the eyes: they are completely independent. When I look at a 3D image, I remain free to converge on any object in the image, not only the object on which the cameras converge (which will appear in the screen plane). Furthermore, camera convergence and the convergence of the eyes do not serve the same function: camera convergence defines the screen plane and adjusts the far parallax; the convergence of the eyes allows the brain to fuse the two images of an object - and the brain deduces from the convergence angle of the eyes how far away that object is.
To avoid that confusion, we prefer to speak of the angulation of the cameras.
When we angulate a camera, we perform a small pan. A horizontal pan shifts the whole image as one block to the left or to the right - unlike a dolly move, the perspective does not change. It is exactly the same as moving a photo horizontally in front of our eyes… or exactly the same as shifting the image horizontally in post-production!
We can deduce that: angulating a camera = shifting in post-production.
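The equivalence above can be quantified: for a pinhole-camera model, a small pan of angle a moves the image sideways by f·tan(a), where f is the focal length expressed in pixels. The sketch below assumes a hypothetical 60° horizontal field of view; the exact shift depends on the lens actually used.

```python
import math

def pan_shift_pixels(pan_deg, image_width_px, hfov_deg):
    """Horizontal image shift (in pixels) produced by a small camera pan.

    Pinhole model: a pan of angle a moves the image centre by f_px * tan(a),
    where f_px = (W / 2) / tan(hfov / 2) is the focal length in pixels.
    """
    f_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    return f_px * math.tan(math.radians(pan_deg))

# Example: a 0.1° pan on a Full HD camera with an assumed 60° horizontal FOV
shift = pan_shift_pixels(0.1, 1920, 60.0)
print(f"{shift:.1f} px")  # roughly 2.9 px
```

This also shows why such small angulations matter: with a typical lens, a pan of a tenth of a degree already moves the image by a few pixels, which is on the order of the parallax values discussed in this chapter.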
We can also deduce that there is no real point in angulating both cameras (and doing so requires very high precision: to obtain a total angulation of 0.1°, for example, you would need to pan each camera by precisely 0.05°) - the only real benefit is keeping a 3D Steadicam well balanced, for example.
When we shoot, this is, again, the FIRST STEP: ADJUSTING THE FAR PARALLAX.
So, when we shoot convergent:
- Before touching the interaxial, we look for the farthest object in the scene on the Stereo 3D compositing monitor, then we angulate a camera until the two images of that object have the desired parallax: for example, +1% (that is, 19 pixels in Full HD, 20 pixels in 2K).
- Be careful: if that object is not at infinity (which is often the case when shooting indoors), the far objects will tend to come slightly closer when the interaxial is increased. So after changing the interaxial, we may re-adjust the far parallax a second time, if we wish, to push it back to the Artificial Horizon.
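The pixel figures quoted for the +1% target can be checked with a quick conversion - a minimal sketch, using the example values from the text:

```python
def parallax_pixels(percent, image_width_px):
    """Convert a parallax expressed as a percentage of image width into pixels."""
    return image_width_px * percent / 100

# The +1% far-parallax example from the text:
print(round(parallax_pixels(1, 1920)))  # Full HD: 19 px
print(round(parallax_pixels(1, 2048)))  # 2K: 20 px
```

Working in percentages rather than raw pixels keeps the parallax budget independent of the delivery resolution, which is why stereographers tend to state targets that way.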
We said earlier that angulating a camera = shifting in post-production. This is not completely true, because each method has its own strengths and weaknesses:
- When we shoot parallel with a post-production shift, we lose a stripe of pixels on the left and right sides of the image. This means we must either crop the image (changing the aspect ratio) or apply a small digital zoom, in most cases 102% or 103%.
- When we shoot convergent, because one camera is angulated, one camera actually films a rectangle and the other a trapezoid. A slight keystoning effect appears in the corners of the image. These distortions (which some stereographers consider negligible in most shots) can become tiring for the audience, and need to be corrected in post-production using grid-based transformation effects.
- In all cases, do not forget that the image will need to be shifted in post-production if we want to adapt our film to different screen sizes.
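The zoom figures mentioned in the first point above follow directly from the width of the lost stripe: after shifting one image by s pixels, only W − s pixels of width remain in common, so the rescale factor is W / (W − s). A minimal sketch, using an assumed 38-pixel shift (a 2% parallax change in Full HD) as the example:

```python
def rescale_after_shift(image_width_px, shift_px):
    """Digital zoom factor needed to hide the stripe lost when one image
    is shifted horizontally by shift_px in post-production."""
    return image_width_px / (image_width_px - abs(shift_px))

# Assumed example: shifting one eye by 38 px (a 2% change) in Full HD
print(f"{rescale_after_shift(1920, 38):.3f}")  # about 1.020, i.e. a 102% zoom
```

Shifts of a percent or two of image width are enough to produce the 102-103% zooms quoted in the text; larger shifts would cost correspondingly more resolution.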