Research

VF-NeRF: Learning Neural Vector Fields for Indoor Scene Reconstruction

Implicit surfaces via neural radiance fields (NeRF) have shown surprising accuracy in surface reconstruction. Despite their success in reconstructing richly textured surfaces, existing methods struggle with weakly textured planar regions, which account for the majority of indoor surfaces. We propose to solve dense indoor surface reconstruction by replacing traditional implicit representations such as the signed distance field (SDF) or surface density with the recently proposed vector field (VF). VF is defined as the unit vector directed towards the nearest surface point. It therefore flips direction at the surface and equals the explicit surface normal there. Except for this flip, or sign change, at the surface, VF remains constant around planar regions and thus provides a strong inductive bias towards planar surfaces. We develop a novel density-VF relationship and a training scheme that allows us to learn VF via volume rendering. As a result, VF-NeRF can model large planar surfaces without additional cues such as segmentations, depth, or normals. Additionally, we show that, when depth cues are available, our method improves further and achieves state-of-the-art results in reconstructing indoor scenes.
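
The key property is easiest to see on a single plane. The snippet below is a rough illustration (not the paper's code): it evaluates the analytic VF of the plane z = 0, which is a constant unit vector on each side of the surface and only flips sign when crossing it. The helper name `vf_plane` is ours, purely for illustration.

```python
# Toy illustration (not the VF-NeRF implementation): the vector field (VF) of a
# single horizontal plane z = 0. For any query point, VF is the unit vector
# pointing towards the nearest surface point, so here it is (0, 0, -sign(z)):
# constant on each side of the plane, flipping sign at the surface.
import numpy as np

def vf_plane(points: np.ndarray) -> np.ndarray:
    """Analytic VF for the plane z = 0; `points` has shape (N, 3)."""
    vf = np.zeros_like(points, dtype=float)
    vf[:, 2] = -np.sign(points[:, 2])  # point straight towards the plane
    return vf

pts = np.array([[0.3, 0.1, 0.5],    # above the plane
                [0.3, 0.1, -0.5]])  # below the plane
print(vf_plane(pts))
# [[ 0.  0. -1.]
#  [ 0.  0.  1.]]
# Up to sign, both vectors equal the plane's normal (0, 0, 1); away from the
# surface the field is constant, which is the planar inductive bias above.
```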

This work has been submitted to a top-tier computer vision conference. Code will be released soon.

COAT-MPC: Performance-driven Constrained Optimal Auto-Tuner for MPC

One of the significant challenges in Model Predictive Control (MPC) is the safe tuning of its cost function parameters. In safe tuning for MPC, the goal is to find the cost function parameters that maximize the system's performance while ensuring that the performance stays consistently above a given threshold. In this context, we propose the Constrained Optimal Auto-Tuner for MPC (COAT-MPC), a method that safely explores the cost function parameter domain to reach the most performant parameters. COAT-MPC uses Upper Confidence Bounds (UCB) over the entire parameter domain as the goal of each optimization iteration and sequentially explores the parameter space towards this goal. We present an in-depth theoretical analysis of our method, establishing its safety with high probability and proving finite-time convergence. We perform comprehensive simulations and comparative analyses on a hardware platform against classical Bayesian Optimization (BO) and state-of-the-art methods. These experiments demonstrate that our approach outperforms these competitive baselines, with fewer constraint violations and lower cumulative regret over time in an autonomous racing scenario.
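
For intuition only, the sketch below shows the general pattern of UCB-guided safe tuning under loose assumptions (a 1-D parameter, a Gaussian-process performance surrogate): propose the parameter with the highest upper confidence bound among candidates whose lower confidence bound still clears the performance threshold, evaluate it in closed loop, and repeat. This is not the COAT-MPC algorithm itself; `evaluate_mpc`, the threshold, and the grid are illustrative placeholders.

```python
# Loose, self-contained sketch of UCB-guided safe parameter tuning
# (NOT the COAT-MPC algorithm): a GP surrogate models performance over the
# parameter domain, the next candidate maximizes the upper confidence bound,
# and candidates are restricted to points whose lower confidence bound stays
# above a performance threshold.
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(x_query, x_query)) - np.sum(v ** 2, axis=0)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def evaluate_mpc(theta):
    # Placeholder for running the closed-loop system (e.g., one racing lap)
    # with cost-function weight `theta` and measuring its performance.
    return -(theta - 0.7) ** 2 + 1.0

grid = np.linspace(0.0, 1.0, 200)   # 1-D parameter domain
threshold = 0.5                     # minimum acceptable performance
beta = 2.0                          # confidence-bound width
x_obs = np.array([0.4])             # known-safe initial parameter
y_obs = np.array([evaluate_mpc(0.4)])

for _ in range(10):
    mean, std = gp_posterior(x_obs, y_obs, grid)
    safe = mean - beta * std >= threshold      # plausibly safe candidates
    ucb = np.where(safe, mean + beta * std, -np.inf)
    theta_next = grid[np.argmax(ucb)]          # optimistic safe goal
    x_obs = np.append(x_obs, theta_next)
    y_obs = np.append(y_obs, evaluate_mpc(theta_next))

print(f"best parameter found so far: {x_obs[np.argmax(y_obs)]:.3f}")
```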

Work in progress, to be submitted to a top-tier robotics conference.