Learning Medium- and Light-Independent Underwater Scenes
We address the challenge of constructing a consistent and photorealistic Neural Radiance Field (NeRF) in inhomogeneously illuminated, scattering environments with unknown, co-moving light sources. While most existing work on underwater scene representation focuses on homogeneous, globally illuminated scattering media, limited attention has been given to scenarios such as a robot exploring water deeper than a few tens of meters, where sunlight becomes insufficient. To address this, we propose a novel illumination field that is locally attached to the camera, enabling the capture of uneven lighting effects within the viewing frustum. We combine this with a volumetric representation of the medium into an overall method that effectively handles the interaction between the dynamic illumination field and the static scattering medium. Evaluation results demonstrate the effectiveness and flexibility of our approach.
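To illustrate what "locally attached to the camera" could mean in code, below is a minimal PyTorch sketch of a camera-attached illumination field. The class name, the choice of inputs (camera-frame point plus surface normal), and the network size are our illustrative assumptions, not the paper's actual architecture; the key idea shown is that query points are rigidly transformed by the world-to-camera pose before the MLP is evaluated, so the field moves with the co-moving lights.

```python
import torch
import torch.nn as nn

class CameraAttachedIlluminationField(nn.Module):
    """Hypothetical illumination MLP queried in the camera coordinate frame."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # Input: 3D point and surface normal, both in camera coordinates.
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Softplus(),  # non-negative intensity factor
        )

    def forward(self, pts_world: torch.Tensor, normals_world: torch.Tensor,
                w2c: torch.Tensor) -> torch.Tensor:
        # Rigidly transform world-frame samples (and rotate normals) into the
        # camera frame, so the learned field is attached to the moving camera.
        R, t = w2c[:3, :3], w2c[:3, 3]
        pts_cam = pts_world @ R.T + t          # (N, 3)
        normals_cam = normals_world @ R.T      # (N, 3)
        return self.mlp(torch.cat([pts_cam, normals_cam], dim=-1))  # (N, 1)
```

In a full pipeline, the returned per-sample factor would modulate the scene color predicted by the global NeRF before volume rendering.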
The global NeRF MLP learns both density and color at each ray sample in the world coordinate frame. Simultaneously, each ray sample is transformed into the local camera coordinate frame, with surface normals derived from the predicted density field, allowing the local illumination field MLP to estimate the light intensity factor received at the sample point. Finally, the medium-related parameters, namely the attenuation coefficient, medium color, and backscatter, are jointly estimated and integrated into an underwater volume rendering formulation.
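As one way to make this concrete, a SeaThru-NeRF-style instantiation of such an underwater volume rendering formulation could read as follows; the paper's exact equation may differ, and the symbols here are our notation:

$$
\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \,\bigl(1 - e^{-\sigma_i \delta_i}\bigr)\, \ell_i\, c_i\, e^{-\sigma^{\mathrm{attn}} t_i} \;+\; \sum_{i=1}^{N} T_i \, e^{-\sigma^{\mathrm{bs}} t_i}\,\bigl(1 - e^{-\sigma^{\mathrm{bs}} \delta_i}\bigr)\, c^{\mathrm{med}},
\qquad
T_i = \exp\Bigl(-\sum_{j<i} \sigma_j \delta_j\Bigr),
$$

where $\sigma_i$ and $c_i$ are the density and color predicted by the global NeRF at sample $i$, $\ell_i$ is the intensity factor from the local illumination field, $\sigma^{\mathrm{attn}}$ and $\sigma^{\mathrm{bs}}$ are the attenuation and backscatter coefficients, $c^{\mathrm{med}}$ is the medium color, $t_i$ is the distance along the ray, and $\delta_i = t_{i+1} - t_i$ is the sample spacing. The first sum is the illuminated, medium-attenuated object term; the second is the backscatter term contributed by the medium itself.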