Software

Time of Flight Tracer (toftracer)

Typical ray tracers keep track of the radiances of rays of light as they are transported through objects between light sources and an observer.  The radiances of all rays that impinge on an observer from a given direction are combined regardless of their path lengths.

Unfortunately, they do not keep track of the time-of-flight or path lengths of the rays.  This information is needed to produce accurate depth images from ToF depth cameras, particularly when the transport phenomena include multipath and subsurface scattering.

Toftracer is a ray tracer that keeps track of the time-of-flight of each ray, even in the presence of multipath and subsurface scattering.  For all rays that impinge on the observer from a given direction, toftracer produces a distribution of radiances as a function of the rays’ path lengths or time-of-flight. [1]

  1. P. Pitts, A. Benedetti, M. Slaney, and P. A. Chou, “Time of Flight Tracer,” Microsoft Research Technical Report no. MSR-TR-2014-142, 8 November 2014.
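
The key bookkeeping difference from a conventional tracer can be sketched in a few lines.  The following is only an illustrative Python sketch, not the toftracer code; all names and bin sizes are invented for the example.  Instead of summing the radiances of all rays arriving at a pixel, each ray's radiance is accumulated into a histogram bin indexed by its path length, or equivalently its time-of-flight.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    class ToFHistogram:
        """Radiance as a function of path length for one pixel / direction."""
        def __init__(self, max_path_m=20.0, bin_size_m=0.01):
            self.bin_size_m = bin_size_m
            self.bins = np.zeros(int(np.ceil(max_path_m / bin_size_m)))

        def add_ray(self, radiance, path_length_m):
            # Accumulate the ray's radiance into the bin for its path length.
            b = int(path_length_m / self.bin_size_m)
            if 0 <= b < len(self.bins):
                self.bins[b] += radiance

        def time_of_flight_s(self):
            # Bin centers expressed as time of flight rather than path length.
            return (np.arange(len(self.bins)) + 0.5) * self.bin_size_m / C

    pixel = ToFHistogram()
    pixel.add_ray(radiance=0.8, path_length_m=2.0)   # direct return
    pixel.add_ray(radiance=0.1, path_length_m=3.4)   # multipath bounce
    # A conventional tracer would report only 0.8 + 0.1 for this pixel; the
    # histogram keeps the two returns separate, so multipath and subsurface
    # scattering show up as extra peaks or tails in the distribution.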

Software is available here.

MPEG Point Cloud Codecs (V-PCC and G-PCC)

The software used to develop the MPEG Point Cloud Codec (PCC) is called a Test Model.  Prior to January 2017, there were three Test Models: one for Category 1 (Static Objects and Scenes), one for Category 2 (Dynamic Objects), and one for Category 3 (Dynamic Acquisition).  These were known as TMC1, TMC2, and TMC3.  After January 2017, TMC1 and TMC3 were merged into a single Test Model known as TMC13.  TMC1 was originally implemented in Matlab; its Matlab code was ported to C++ and integrated into the existing TMC3 C++ code.  Some of the generic C++ code in TMC13 continues to refer to TMC3 only for these historical reasons.

Currently there are two versions of PCC: video-based PCC (V-PCC), whose test model is TMC2, and geometry-based PCC (G-PCC), whose test model is TMC13.  V-PCC is seen as a shorter-term solution designed to leverage existing investments in video (such as decoder chips in mobile devices), while G-PCC is seen as a longer-term solution with greater potential for compression as well as a broader class of applications. [1]

  1. S. Schwarz, M. Preda, V. Baroncini, M. Budagavi, P. S. Cesar, P. A. Chou, R. A. Cohen, M. Krivokuća, S. Lasserre, Z. Li, J. Llach, K. Mammou, R. Mekuria, O. Nakagami, E. Siahaan, A. Tabatabai, A. Tourapis, and V. Zakharchenko, “Emerging MPEG Standards for Point Cloud Compression,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS) special issue on immersive video, March 2019.
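
For context, the core of the geometry-based approach is octree coding of the point positions: the bounding cube is recursively split into octants, and an occupancy pattern is signaled for each occupied node.  The following is only a rough Python sketch of that general idea, not the TMC13 implementation (which, among much else, entropy-codes the occupancy bits with context models); all names are invented for the example.

    def octree_occupancy(points, origin, size, depth):
        # points: iterable of integer (x, y, z) inside the cube at `origin` with side `size`.
        # Returns a list of 8-bit occupancy patterns, one per occupied internal node.
        if depth == 0:
            return []
        half = size // 2
        children = [[] for _ in range(8)]
        for (x, y, z) in points:
            i = (((x >= origin[0] + half) << 2) |
                 ((y >= origin[1] + half) << 1) |
                 (z >= origin[2] + half))
            children[i].append((x, y, z))
        stream = [sum(1 << i for i in range(8) if children[i])]  # occupancy byte
        for i, child in enumerate(children):
            if child:
                child_origin = (origin[0] + half * ((i >> 2) & 1),
                                origin[1] + half * ((i >> 1) & 1),
                                origin[2] + half * (i & 1))
                stream += octree_occupancy(child, child_origin, half, depth - 1)
        return stream

    patterns = octree_occupancy([(1, 2, 3), (10, 2, 3)], origin=(0, 0, 0), size=16, depth=4)

V-PCC, by contrast, projects the point cloud onto 2D image planes and compresses the resulting geometry and texture images with a conventional video codec.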

Software is available here.  An MPEG password may be required.

Learned Volumetric Attribute Compression (LVAC)

This is the first work to compress volumetric functions represented by local coordinate-based, or implicit, neural networks.  A local coordinate-based network is a neural network whose input comprises both a position within a local block and a latent vector for the block, and whose output is the value of a scalar- or vector-valued function over the block.  Though our work will have application to compressing volumetric functions such as neural radiance fields, here we apply it to compressing the attributes of the points in a point cloud given the geometry (i.e., the positions) of the points.  The encoder encodes the volumetric function; the decoder decodes the volumetric function and then recovers the attributes by evaluating the decoded function on the given geometry [1, 2].

  1. B. Isik, P. A. Chou, S. J. Hwang, N. Johnston, and G. Toderici, “LVAC: Learned Volumetric Attribute Compression for Point Clouds using Coordinate Based Networks,” Google Research Technical Report, November 2021.  [arxiv]
  2. B. Isik, P. A. Chou, S. J. Hwang, N. Johnston, and G. Toderici, “LVAC: Learned Volumetric Attribute Compression for Point Clouds using Coordinate Based Networks,” J. Frontiers in Signal Processing, September 2022. [supplementary material]
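
As an illustration of what a local coordinate-based network looks like, the following is a minimal Python/NumPy sketch, not the LVAC model of [1, 2]: the layer sizes, block size, and latents are placeholders, and the network is untrained.  The shared network maps (position within a block, latent vector for the block) to an attribute value, and the decoder evaluates it at each given point position.

    import numpy as np

    rng = np.random.default_rng(0)

    # Shared two-layer MLP.  In LVAC the weights and the per-block latents are
    # learned and transmitted; here they are random / zero placeholders.
    W1 = rng.normal(size=(3 + 16, 64)); b1 = np.zeros(64)   # input: local xyz + 16-dim latent
    W2 = rng.normal(size=(64, 3));      b2 = np.zeros(3)    # output: e.g. an RGB attribute

    def attribute(local_xyz, latent):
        # f(position within block, block latent) -> attribute value.
        h = np.maximum(np.concatenate([local_xyz, latent]) @ W1 + b1, 0.0)  # ReLU hidden layer
        return h @ W2 + b2

    block_size = 16
    latents = {}   # block index -> decoded latent vector

    def decode_attribute(xyz):
        # Decoder side: look up the block's latent and evaluate the shared
        # network at the point's normalized position within that block.
        block = tuple(int(c) // block_size for c in xyz)
        z = latents.setdefault(block, np.zeros(16))          # placeholder latent
        local = (np.asarray(xyz, dtype=float) % block_size) / block_size
        return attribute(local, z)

    # One attribute vector is reconstructed per point of the given geometry.
    recon = [decode_attribute(p) for p in [(3, 5, 7), (20, 1, 2)]]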

Software is available here.