This project focuses on applying neural radiance fields to obtain 3D spatial data for holography applications.
This project employs a PyTorch implementation of the original NeRF model and uses the model's predicted density output as a filter to extract 3D spatial data. The network is trained on a dataset of scene view images, and the trained network is then used to predict a 3D point cloud for the given scene.
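The filtering step can be pictured as querying the trained network on a regular grid of 3D positions and keeping only the points whose predicted density is high. The sketch below is only an illustration of that idea, not the project's actual code; the model interface (`nerf_model` returning density in its last output channel), the scene bounds, and the density threshold are all assumptions.

```python
import torch

# Illustrative sketch of density-based point-cloud extraction.
# `nerf_model` is assumed to map (N, 3) xyz coordinates to (N, 4) outputs
# whose last channel is the volume density sigma; this interface, the scene
# bounds, and the threshold below are hypothetical.
def extract_point_cloud(nerf_model, bounds=(-1.0, 1.0), resolution=128,
                        sigma_threshold=10.0, device="cpu"):
    lo, hi = bounds
    axis = torch.linspace(lo, hi, resolution, device=device)
    # Build a regular 3D grid of query points covering the scene volume.
    xs, ys, zs = torch.meshgrid(axis, axis, axis, indexing="ij")
    points = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)

    densities = []
    with torch.no_grad():
        for chunk in points.split(65536):          # query in chunks to limit memory use
            sigma = nerf_model(chunk)[..., -1]     # density channel
            densities.append(sigma)
    sigma = torch.cat(densities)

    # Keep only points whose predicted density exceeds the threshold;
    # these approximate the occupied regions of the scene (the point cloud).
    return points[sigma > sigma_threshold]
```

In practice, the grid resolution and the density threshold trade off point-cloud detail against memory use and noise.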
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
For this project, you will need Python 3.8 or greater. To install the required Python libraries, run the following pip command.
pip install -r requirements.txt

This project relies on a trained NeRF model. For more information on training a NeRF model for a scene, see the documentation.
To generate a colored depth map of the scene, run the following command.
python3 mapping.py
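The internals of `mapping.py` are not shown here. One common way to obtain a depth map from a NeRF is to compute the expected ray-termination depth from the standard volume-rendering weights and then apply a colormap; the sketch below illustrates that approach under assumed interfaces. The model call, the per-ray sample points `pts`, and the sample depths `z_vals` are hypothetical inputs, not this project's actual API.

```python
import torch
import matplotlib.pyplot as plt

# Illustrative sketch (assumed interfaces): given per-ray sample positions
# `pts` of shape (num_rays, num_samples, 3) and sample depths `z_vals` of
# shape (num_rays, num_samples), compute the expected termination depth.
def render_depth(nerf_model, pts, z_vals):
    # Query density at every sample along every ray (last output channel).
    sigma = nerf_model(pts.reshape(-1, 3))[..., -1].reshape(z_vals.shape)

    # Distances between adjacent samples along each ray.
    dists = z_vals[..., 1:] - z_vals[..., :-1]
    dists = torch.cat([dists, torch.full_like(dists[..., :1], 1e10)], dim=-1)

    # Standard volume-rendering weights: alpha compositing of densities.
    alpha = 1.0 - torch.exp(-sigma * dists)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]
    weights = alpha * trans

    # Expected depth along each ray.
    return (weights * z_vals).sum(dim=-1)

# Example of colouring the per-ray depths with a matplotlib colormap
# (H and W are the image height and width used to generate the rays):
# depth = render_depth(model, pts, z_vals).reshape(H, W).cpu().numpy()
# plt.imsave("depth_map.png", depth, cmap="viridis")
```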