ABOUT
The objective of this work is to build a NeRF model [1] in PyTorch that represents highly complex 3D scenes with a fully-connected neural network and generates novel views from a partial set of 2D images. The project experiments with complex real-world image data, exploring how the synthesized results are affected by modifying the object scale, changing the camera distance from the object, representing multiple objects in one scene, representing reflective and transparent objects, and varying the angle at which the input images are captured.
Overall, the depth representation of the object was found to play an influential role in the quality of the NeRF-synthesized results, as did each of the factors studied: object scale, camera distance from the object, the number of objects in a scene, reflective and transparent surfaces, and the capture angle of the input images.
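To make the "fully-connected neural network" concrete, the following is a minimal PyTorch sketch of a NeRF-style network: a positionally encoded 3D point and a viewing direction are mapped to an RGB color and a volume density. The names (`TinyNeRF`, `positional_encoding`), layer sizes, and frequency count here are illustrative assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=10):
    # Map each coordinate to [sin(2^k x), cos(2^k x)] for k = 0..num_freqs-1;
    # this helps the MLP represent high-frequency scene detail.
    freqs = 2.0 ** torch.arange(num_freqs)
    angles = x[..., None] * freqs                 # (..., dim, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., dim * 2 * num_freqs)

class TinyNeRF(nn.Module):
    """Toy fully-connected NeRF-style network: encoded 3D position plus
    viewing direction in, RGB color and density sigma out."""
    def __init__(self, num_freqs=10, hidden=128):
        super().__init__()
        in_dim = 3 * 2 * num_freqs + 3            # encoded xyz + raw view direction
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # RGB (3 channels) + density (1)
        )

    def forward(self, xyz, view_dir):
        feats = torch.cat([positional_encoding(xyz), view_dir], dim=-1)
        out = self.mlp(feats)
        rgb = torch.sigmoid(out[..., :3])         # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])          # non-negative volume density
        return rgb, sigma

model = TinyNeRF()
rgb, sigma = model(torch.rand(4, 3), torch.rand(4, 3))
print(rgb.shape, sigma.shape)
```

In the full method, such per-point colors and densities are composited along camera rays via volume rendering to produce the final pixel values.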
NOTE: This is a project from CSCI 5980 - Spring 2023!
References:
[1] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. arXiv, 2020.