2024 Volume 33 Issue 11

Jianying Zhu, Yong Bi, Minyuan Sun, Weinan Gao. Rapid hologram generation through backward ray tracing and adaptive-resolution wavefront recording plane[J]. Chinese Physics B, 2024, 33(11): 114204. doi: 10.1088/1674-1056/ad7c2e

Rapid hologram generation through backward ray tracing and adaptive-resolution wavefront recording plane

  • Received: 13 July 2024
    Accepted: 4 September 2024
    Available online: 1 October 2024

Figures(7)  /  Tables(2)



Abstract: An advanced method for rapidly computing holograms of large three-dimensional (3D) objects combines backward ray tracing with an adaptive-resolution wavefront recording plane (WRP) and adaptive angular spectrum propagation. In the initial phase, a WRP with adjustable resolution and sampling interval based on the object's size is defined to capture detailed information from large 3D objects. The second phase employs an adaptive angular spectrum method (ASM) to efficiently compute the propagation from the large-sized WRP to the small-sized computer-generated hologram (CGH). The computation is accelerated using CUDA and OptiX. Optical experiments confirm that the algorithm can generate high-quality holograms with shadow and occlusion effects at a resolution of 1024 × 1024 in 29 ms.

1.   Introduction
  • Holographic technology can fully reproduce the light-wave information of an object and thus satisfy every cue of human stereoscopic vision, which is why it is regarded as a true 3D display technology. Computer-generated holograms (CGHs) are interference patterns calculated by simulating the definition, propagation, and diffraction of light waves from a virtual object on a computer. Unlike conventional holograms, CGHs require neither a recording medium nor a complex coherent optical recording process. However, they demand a significant amount of computation, which limits their application in real-time 3D display systems.

    In order to address the significant computational cost, several acceleration algorithms have been proposed, mainly divided into point-source-based algorithms,[1–4] polygon-based algorithms,[5–8] and layer-based algorithms.[9–11] Additionally, deep learning is a major research direction in computational holography. Deep-learning-based CGH methods can offer high-speed, high-quality image reconstruction.[12–16] For instance, a recent study introduced an autoencoder-based neural network for phase-only hologram generation, achieving high-fidelity 4K-resolution holograms in 0.15 s with fewer speckles than existing methods.[17] Point-source CGH is the simplest and most practical algorithm for computing holograms.[1,3] Current acceleration algorithms for point-source CGH mainly include the look-up table (LUT) method[18,19] and the WRP method;[4] combining the LUT with the WRP achieves even higher computational speed.[20] Moreover, the computational framework of point-source CGH is inherently compatible with parallel computing architectures, particularly those utilizing hardware acceleration techniques.[3,21] However, the standard 3D point-cloud formulation does not support many basic effects such as occlusion and shading,[22] whereas ray tracing enables realistic reconstruction with shadows, highlights, and global illumination.[23–25] Consequently, several studies have explored point-source holography algorithms that integrate the WRP with backward ray tracing.[26] To generate CGHs of large objects, a recent study proposed a ray-tracing CGH algorithm based on multiple off-axis WRPs.[27] This approach achieved a reconstruction speed of 9.8 FPS using 2 × 2 WRPs and shifted ASM propagation. Its drawback, however, is that the computation time scales with the object's volume: larger objects require more WRPs, significantly increasing the computational cost.

    To address the aforementioned challenges, we propose a high-speed hologram computation method for large objects that combines backward ray tracing with adaptive resolution WRP and adaptive angular spectrum propagation. This method ensures computational speed is independent of the reconstructed image size while achieving high-quality holographic reconstructions with realistic lighting, shadows, and occlusion effects. By leveraging OptiX and CUDA for accelerated computation, we can generate CGHs with a resolution of 1024 × 1024 for large 3D objects in just 29 ms. A 25 FPS holographic reconstruction video provided in the appendix demonstrates the effectiveness of the algorithm.

2.   Proposed method
  • The algorithm operates in two steps: first, backward ray tracing combined with an illumination model is used to obtain a WRP that is larger than the hologram and placed near the object. Second, the smaller CGH is calculated from this WRP using an adaptive angular spectrum diffraction method.

  • In backward ray tracing, each pixel on the CGH surface acts as a ray emitter that launches rays in fixed directions. When a ray intersects the surface of an object, a new ray is generated at the intersection point according to the illumination model and the laws of reflection and refraction. This process continues recursively until the ray either reaches a light source or hits the maximum recursion depth. The complex amplitude contributed by each ray to the emitting pixel is calculated from the distance between the intersection point and the emitter:

    U_CGH(x, y) = Σ_{j=1}^{N} (A_j / r_j) exp(i 2π r_j / λ). (1)

    Here, N is the number of rays, j is the ray index, A_j is the intersection amplitude calculated from the illumination model, r_j is the distance between the intersection point and the ray emitter, and λ is the wavelength. Figure 1 illustrates a comparison between conventional point-source ray tracing and backward ray tracing.
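The per-pixel accumulation described above can be sketched in a few lines of Python (a toy NumPy illustration, not the paper's CUDA/OptiX implementation; the function name and (A_j, r_j) data layout are our assumptions):

```python
import numpy as np

def pixel_complex_amplitude(hits, wavelength=532e-9):
    """Sum the spherical-wave contributions of all rays traced from one
    hologram pixel: U = sum_j (A_j / r_j) * exp(i * 2*pi/lambda * r_j).

    `hits` is a list of (A_j, r_j) pairs: A_j is the intersection
    amplitude given by the illumination model, r_j the distance from
    the intersection point back to the emitting pixel."""
    k = 2.0 * np.pi / wavelength
    u = 0.0 + 0.0j
    for amp, r in hits:
        u += (amp / r) * np.exp(1j * k * r)  # one spherical wavelet per ray
    return u

# Example: two rays hitting surfaces 0.10 m and 0.12 m from the pixel.
u = pixel_complex_amplitude([(1.0, 0.10), (0.5, 0.12)])
```

In the actual algorithm this sum runs per WRP pixel on the GPU, with A_j and r_j delivered by the ray-tracing kernel.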

    The complex amplitude of the WRP is calculated with Eq. (1) by replacing U_CGH(x, y) with U_WRP(x, y):

    U_WRP(x, y) = Σ_{j=1}^{N} (A_j / r_j) exp(i 2π r_j / λ). (2)

    A schematic diagram of the conventional WRP and the backward-ray-tracing WRP is given in Fig. 2.

    In the conventional WRP approach, the maximum diffraction angle θ for reconstructing a 3D object from the CGH can be expressed as

    θ = arcsin(λ / (2 p_c)), (3)

    where p_c is the pixel pitch of the hologram.

    The radius W_j of the region on the WRP that records the j-th object point is

    W_j = z_j tan θ, (4)

    where z_j is the distance from the j-th object point to the WRP.

    As a practical matter, θ can be smaller, because the rays only need to cover the CGH aperture. As shown in Fig. 2,

    tan θ = W / d, (5)

    where W represents half of the CGH width and d is the distance between the WRP and the CGH. With tan θ = W/d, equation (4) can be rewritten as

    W_j = z_j W / d. (6)

    Since z_j is small compared with d, W_j is also small compared with W. Therefore, a large amount of computation is saved.

    The number of rays emitted per pixel is determined by the angular range and the angular interval, where the angular range is [−θ_o, θ_o]. In the horizontal direction, the number of rays is

    N_h = 2 θ_o / Δθ + 1, (7)

    where Δθ = p/z_j and p represents the object sampling interval. Based on the actual application scenario and the angular resolution of the human eye, we set p to 0.1 mm. The distribution of rays in the vertical direction is the same as in the horizontal direction.
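A small numerical sketch of this per-point bookkeeping (the function name is ours, and the footprint radius is taken as W_j = z_j·W/d, following the similar-triangle geometry of Fig. 2):

```python
import numpy as np

def ray_bundle_params(z_j, d, W, p=0.1e-3):
    """Footprint radius and horizontal ray count for one object point.

    z_j : distance from the object point to the WRP
    d   : distance from the WRP to the CGH
    W   : half of the CGH width
    p   : object sampling interval (0.1 mm in the text)"""
    W_j = z_j * W / d                          # footprint radius on the WRP
    theta_o = np.arctan(W / d)                 # practical maximum angle
    dtheta = p / z_j                           # angular interval of the rays
    n_rays = int(2.0 * theta_o / dtheta) + 1   # rays in the horizontal direction
    return W_j, theta_o, n_rays

# A point 10 mm in front of the WRP; CGH half-width 5 mm, 100 mm away:
W_j, theta_o, n_rays = ray_bundle_params(1e-2, 1e-1, 5e-3)
```

Because z_j is much smaller than d, W_j comes out far smaller than W, which is exactly the computational saving noted above.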

  • The second step involves the diffraction calculation from the large-sized WRP to the small-sized CGH. Various methods exist for diffraction calculation between parallel planes with different sampling intervals. As early as 2012, Weng et al. used shifted Fresnel diffraction[28] to perform WRP-based holographic calculations for objects exceeding the size of the CGH.[29] There are also improved algorithms based on angular spectrum propagation, such as the scaled angular spectrum method[30] and the adaptive-sampling angular spectrum method.[31] Compared with the Fresnel diffraction method, the improved angular spectrum methods do not rely on the paraxial approximation, overcome the limitations of the standard angular spectrum method in far-field applications, and offer more flexible sampling intervals and ranges. The following calculations build upon this approach.

    In the general ASM, using the FFT yields a circular convolution instead of a linear convolution, which introduces errors. To mitigate these errors, zero-padding of the input optical field is typically required. From a physical perspective, as shown in Fig. 3, the FFT extends the input optical field periodically, and the periodic copies can affect the output field; zero-padding the input field suppresses this interference. The required zero-padding range t is given by

    t = (S2 − S1)/2 + z tan θ, (8)

    where S1 and S2 represent the ranges of the source plane and the destination plane, respectively. Δ1 and Δ2 are the sampling intervals of the source plane and the destination plane, respectively. z is the diffraction distance. θ is the maximum diffraction angle, which can be obtained from Eq. (3).

    In the frequency domain, zero-padding effectively reduces the sampling interval. Thus, the non-uniform fast Fourier transform (NUFFT) can be used to directly achieve the correspondingly smaller sampling interval without actual zero-padding, thereby reducing the number of samples required for the transformation and increasing the computational speed.

    Based on the required zero-padding range t, the sampling interval of the transfer function is obtained as

    Δf = 1/(S1 + t). (9)

    Additionally, as shown in Fig. 3, the maximum sampling frequency is given by

    f_max = L / (λ √(z² + L²)), (10)

    where L = (S1 + S2)/2. In addition, according to the Nyquist sampling theorem, f_max < 1/(2Δ1); therefore, f_max is taken as the smaller of the two values. The number of samples M is given by M = ⌈2 f_max / Δf⌉, where ⌈·⌉ denotes rounding up to the nearest integer.
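The sampling bookkeeping can be gathered into one routine (a hedged sketch: the function name is ours, and the expressions for the padding range t and the geometric bound on f_max encode our reading of the conditions illustrated in Fig. 3):

```python
import numpy as np

def adaptive_asm_sampling(S1, S2, z, theta, wavelength=532e-9, d1=8e-6):
    """Frequency-domain sampling parameters for the WRP-to-CGH step.

    S1, S2 : extents of the source (WRP) and destination (CGH) planes
    z      : diffraction distance
    theta  : maximum diffraction angle
    d1     : sampling interval of the source plane"""
    t = (S2 - S1) / 2.0 + z * np.tan(theta)     # required zero-padding range
    df = 1.0 / (S1 + t)                          # transfer-function sampling interval
    L = (S1 + S2) / 2.0
    f_geom = L / (wavelength * np.hypot(z, L))   # geometric bound on the bandwidth
    f_max = min(f_geom, 1.0 / (2.0 * d1))        # capped by the source Nyquist limit
    M = int(np.ceil(2.0 * f_max / df))           # number of frequency samples
    return t, df, f_max, M

# Illustrative values: 4 mm WRP, 8 mm CGH, 100 mm propagation, 0.05 rad.
t, df, f_max, M = adaptive_asm_sampling(4e-3, 8e-3, 0.1, 0.05)
```

Here M replaces the (much larger) zero-padded FFT length, which is where the speed advantage for large objects comes from.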

    After obtaining the required number of samples, the diffraction optical field can be calculated using the NUFFT and inverse NUFFT (INUFFT) transformations. The type-3 NUFFT, accelerated by the fast Gaussian gridding method proposed by Greengard and Lee,[32] is used:

    A(f_m) = Σ_n U_WRP(x_n) exp(−i 2π f_m x_n), (11)

    U_CGH(x_k) = Σ_m A(f_m) H(f_m) exp(i 2π f_m x_k), (12)

    where H(f) = exp(i 2π z √(1/λ² − f²)) is the angular spectrum transfer function (written in one dimension for brevity).

    Using the above formulas, the diffraction calculation from the large-sized WRP to the small-sized CGH can be achieved. Compared with traditional FFT-based diffraction methods, the use of the NUFFT inevitably increases the per-sample computational cost. However, the NUFFT can adaptively adjust the sampling parameters and thus avoid zero-padding, which offsets that cost. Overall, for diffraction calculations involving objects close in size to the CGH, the computation times of the two approaches are comparable, as evidenced by several previous NUFFT-based diffraction algorithms.[31,33] For larger objects, NUFFT-based algorithms allow flexible adjustment of the sampling parameters, yielding a reduction in computation time proportional to the ratio of the sampling intervals between the object plane and the target plane.
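For intuition, here is a one-dimensional, direct-sum stand-in for the NUFFT/INUFFT pair (exact but O(MN); the fast Gaussian gridding used in the paper makes the same transforms quasi-linear; the function name and toy scale are ours):

```python
import numpy as np

def asm_nufft_sketch(u1, x1, x2, z, wavelength=532e-9, M=1024):
    """Propagate a 1-D sampled field u1 at positions x1 to positions x2
    via the angular spectrum, evaluating the non-uniform transforms as
    direct sums (a slow but exact stand-in for NUFFT/INUFFT)."""
    dx = x1[1] - x1[0]
    f_max = 1.0 / (2.0 * dx)                  # Nyquist bound of the source grid
    f = np.linspace(-f_max, f_max, M)         # frequency samples
    df = f[1] - f[0]
    # forward transform: angular spectrum at the chosen frequencies
    A = np.exp(-2j * np.pi * np.outer(f, x1)) @ u1
    # band-limited angular spectrum transfer function
    fz = np.sqrt(np.maximum(1.0 / wavelength**2 - f**2, 0.0))
    H = np.exp(2j * np.pi * z * fz)
    # inverse transform onto the (possibly differently sampled) target grid
    u2 = np.exp(2j * np.pi * np.outer(x2, f)) @ (A * H) * dx * df
    return u2
```

With z = 0 and x2 = x1 the routine approximately returns the input field, which is a convenient sanity check; the destination grid x2 is free to use a different sampling interval than x1, which is the point of the adaptive method.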

3.   Experiment
  • To demonstrate the efficiency of the proposed algorithm, we compared its computational efficiency with the MO-WRP method proposed in Ref. [27], which also utilizes backward ray tracing and is suitable for generating CGHs of large objects. The computational platform for both algorithms consisted of a computer running Microsoft Windows 10, equipped with an AMD Ryzen 5 3600 6-core processor (3.60 GHz), 32 GB of memory, and Microsoft Visual C++ 2017 (Intel C++ Compiler version 12.1). Additionally, we used an Nvidia GeForce RTX 2080 Ti GPU board, OptiX 6.5, and CUDA 10.1 for GPU programming. Table 1 shows the relevant CGH computation parameters.

    In the experiments, we set the area of the reconstructed image to be 1, 4, and 9 times the area of CGH (corresponding to N = 1, 2, 3, respectively). We selected a teapot model (41472 points, 16188 polygons) as the object to compare the computation times of the two methods. The model is shown in Fig. 4. The model size was adaptively adjusted to match the dimensions of the selected WRP. In the MO-WRP algorithm, the corresponding number of WRPs was 1 × 1, 2 × 2, and 3 × 3, respectively.

    The calculation time of these algorithms running on a GPU is shown in Table 2 and Fig. 5.

    As can be seen, the computation time of the MO-WRP algorithm increases with the size of the reconstructed image, whereas the proposed algorithm is almost unaffected by it.

    Figure 6 illustrates the optical setup for holographic reconstruction. A 532-nm green laser is collimated by a beam expander (Thorlabs GBE20-A) and directed onto the SLM (Holoeye GAEA-2) loaded with the CGH. After a 4f system filters out the zero-order light, a camera without an imaging lens (Nikon D90) placed on the reconstruction plane records the reconstructed images. Figure 7 shows optical reconstruction images from different angles. The highlights, shadows, and occlusion effects on the teapot are clearly visible, demonstrating the effectiveness of the algorithm. Additionally, a dynamic reconstruction video of the teapot model at 25 FPS is provided in the supporting information (see Visualization 1), further confirming the high-speed performance of the algorithm.

4.   Conclusion
  • This study proposes an advanced method combining backward ray tracing and an adaptive-resolution WRP for rapidly computing holograms of large objects. Optical experiments confirm that the algorithm can generate high-quality holograms with shadow and occlusion effects at a resolution of 1024 × 1024 within 29 ms. The algorithm holds promise for future applications in dynamic holography.
