In geometrical optics, a well-focused conventional camera projects the intensity emitted by a distant object point onto the image plane such that its image point appears sharp. With the same focus setting, a point on a closer object would come into focus behind the sensor, meaning that its rays are distributed over several positions on the image plane, which produces a blurred spot in the output image. The plenoptic camera overcomes this limitation with the aid of a micro lens array and the image processing technique outlined below.
As highlighted in the animation, the light intensity of an image point E'0(s0) emitted from M0 is distributed over the image sensor locations Efs(u0, s0), Efs(u1, s0) and Efs(u2, s0). Hence, by summing up these Efs values, the intensity at point E'0(s0) is retrieved. On closer inspection, it becomes apparent that each image point in plane E'0 is obtained by summing all pixel values within the respective micro image s. A problem arises if the sum E'0(s0) exceeds the maximum intensity that can be represented. In conventional photography, this artefact is known as clipping and can be avoided by lowering the exposure. The same applies to the plenoptic camera; an alternative, however, is to synthesise E'0(s0) as the arithmetic mean of all pixels Efs involved in forming E'0(s0).
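The averaging described above can be sketched in a few lines of NumPy. The example below is a minimal one-dimensional illustration, assuming the raw capture is laid out as consecutive micro images of equal size; the function name, array layout and parameter `n_u` are hypothetical choices for this sketch, not part of an existing API.

```python
import numpy as np

def refocus_main_plane(raw, n_u):
    """Synthesise the image focused at M0 from a 1-D plenoptic raw capture.

    raw  : 1-D array of sensor pixels Efs, assumed to be laid out as
           consecutive micro images of n_u pixels each.
    n_u  : number of pixels per micro image (samples along u).

    Each output point E'0(s) is the arithmetic mean of all pixels inside
    micro image s, which avoids the clipping a plain sum could cause.
    """
    micro_images = raw.reshape(-1, n_u)   # one row per micro image s
    return micro_images.mean(axis=1)      # average over u within each s

# Two micro images of three pixels each:
raw = np.array([1., 2., 3., 4., 5., 6.])
print(refocus_main_plane(raw, 3))        # → [2. 5.]
```

Dividing by the number of contributing pixels rather than summing keeps the result within the representable intensity range, mirroring the averaging remedy mentioned above.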
As the term refocusing suggests, another object plane, e.g. M1, may be brought into focus computationally from the same raw image capture. For instance, light rays emanating from plane M1 can be thought of as projecting corresponding image points at E'1 behind the image sensor. The intensity of E'1(s0) is then recovered by integrating the Efs intensities at positions (u2, s0), (u1, s1) and (u0, s2). This example shows that pixels selected to form image points behind the sensor are spread over several micro images. In general, the closer the refocused object plane Ma lies to the camera, the larger the gap between the micro lenses from which pixels have to be merged. Recalling the second figure in the Sub-aperture section, an analogous representation is depicted below, showing how refocusing can be accomplished from previously extracted sub-aperture images.
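Viewed from the sub-aperture perspective, integrating Efs(u2, s0), Efs(u1, s1) and Efs(u0, s2) amounts to displacing each sub-aperture image by an amount proportional to its index u before averaging, i.e. a shift-and-sum. The sketch below illustrates this in one dimension; the function name, the integer shift parameter and the use of wrap-around boundaries via `np.roll` are simplifying assumptions made for this example.

```python
import numpy as np

def refocus_from_subapertures(sub_imgs, shift):
    """Shift-and-average refocusing from 1-D sub-aperture images.

    sub_imgs : 2-D array of shape (n_u, n_s); row u holds the
               sub-aperture image Efs(u, :).
    shift    : integer selecting the synthetic focus plane Ma; each
               sub-aperture image u is displaced by shift*u samples
               before averaging (shift = 0 reproduces the focus at M0).

    Note: np.roll wraps around at the borders, a simplification that a
    real implementation would replace with proper boundary handling.
    """
    n_u, _ = sub_imgs.shape
    shifted = np.stack([np.roll(sub_imgs[u], shift * u) for u in range(n_u)])
    return shifted.mean(axis=0)

# A point source off the main focal plane: its images drift across the
# sub-aperture views, so shift = 1 realigns them while shift = 0 does not.
sub = np.zeros((3, 5))
sub[0, 3] = sub[1, 2] = sub[2, 1] = 3.0
print(refocus_from_subapertures(sub, 1))  # → [0. 0. 0. 3. 0.]
```

With the matching shift the contributions coincide and the point is rendered sharp; with any other shift they stay spread out, which is exactly the blurred appearance of planes away from the synthetic focus.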
While investigating the image refocusing process, the question arises what the distance to a refocused object plane Ma and its depth of field might be. A solution to this is discussed in the section Distance Estimation.
C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, "Refocusing distance of a standard plenoptic camera," Opt. Express 24, Issue 19, 21521-21540 (2016).
C. Hahne, A. Aggoun, and V. Velisavljevic, "The refocusing distance of a standard plenoptic photograph" [Invited Paper], in 3D-TV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 8-10 July 2015.
C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. Fernández, "Light field geometry of a standard plenoptic camera," Opt. Express 22, Issue 22, 26659-26673 (2014).
C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. Fernández, "Baseline of virtual cameras acquired by a standard plenoptic camera setup," in 3D-TV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2-4 July 2014.
C. Hahne and A. Aggoun, "Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera," Proc. SPIE 9023, in Digital Photography X, 902305 (March 7, 2014).
If you have any questions, feel free to contact: info [ät] christopherhahne.de