The subtleties in these computer-generated images of translucent materials are important. Texture, color, contrast, and sharpness combine to create a realistic image. © Ioannis Gkioule/Shuang Zhao

Scientists investigate ‘virtual’ and ‘reality’

New research in computer graphics will advance artificial vision, 3D displays and video editing, say scientists at Harvard University whose work was part-funded by the European Research Council.

Researchers Hanspeter Pfister and Todd Zickler investigated the gap between ‘virtual’ and ‘real’, seeking to answer the question: how do we see what we see? One project, led by Zickler, looked for better ways to mimic the appearance of a translucent object, such as a bar of soap. The resulting paper elucidates how humans perceive and recognise real objects, and how software can exploit the details of that process to produce the most realistic computer-rendered images possible.

The new approach focuses on translucent materials’ phase function: the part of a mathematical description of light transport that specifies how light scatters inside an object and, therefore, how we see what we see.
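The study explores a broad family of phase functions, but as a generic illustration (not the authors’ own model or code), a widely used single-parameter example is the Henyey-Greenstein phase function, which describes how strongly light is scattered forwards or backwards inside a material:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function p(cos theta).

    g in (-1, 1) controls the scattering anisotropy:
    g > 0 favours forward scattering, g < 0 backward scattering,
    and g = 0 is isotropic. The function is normalised so that it
    integrates to 1 over the sphere of directions.
    """
    return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

# Two materials with different anisotropy redirect the same incoming
# light into very different angular distributions, which is what
# changes the rendered "look" of a translucent object.
angles = np.linspace(0.0, np.pi, 5)
for g in (0.0, 0.5, 0.9):
    print(g, henyey_greenstein(np.cos(angles), g))
```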

Zickler’s team first rendered thousands of computer-generated images of a single object, each with a different simulated phase function, so that every image’s translucency differed slightly from the next. A program then compared each image’s pixel colours and brightness with those of every other image and quantified how different the two were. From these pairwise differences, the software built a map of the phase-function space, making it easy for the researchers to identify a much smaller set of images and phase functions that were representative of the whole space.
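The actual study relies on a physically based renderer and a perceptually motivated image metric; the sketch below only mirrors the structure of the pipeline with stand-in components. The `toy_render` function and the plain Euclidean pixel distance are illustrative placeholders, not the researchers’ method.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def toy_render(g, size=32, seed=0):
    """Placeholder for a physically based renderer: produce a fake
    'image' whose pixel statistics depend on a phase-function
    parameter g. In the real study each image is a full simulation
    of light transport inside a translucent object."""
    rng = np.random.default_rng(seed)
    base = rng.random((size, size))
    return np.clip(base * (0.5 + 0.5 * g), 0.0, 1.0)

# 1. Render one image per sampled phase function.
gs = np.linspace(0.0, 0.95, 200)
images = np.stack([toy_render(g).ravel() for g in gs])

# 2. Pairwise differences between images based on pixel values
#    (the study uses a more perceptually meaningful comparison).
D = squareform(pdist(images, metric="euclidean"))

# 3. Map the phase-function space so that distances between points
#    approximate the image differences (multidimensional scaling).
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)

# 4. Pick a small representative set that covers the space:
#    cluster the map and keep the image nearest each cluster centre.
k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embedding)
representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dist_to_centre = np.linalg.norm(embedding[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(dist_to_centre)]))

print("representative phase-function parameters:", gs[representatives])
```

A small representative set like this is what makes the final step practical: human observers can realistically compare a handful of images, but not thousands.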

Finally, the researchers asked people to compare these representative images and judge how similar or different they were, shedding light on the properties that help us decide which objects are plastic and which are soap simply by looking at them.

“This study, aiming to understand the appearance space of phase functions, is the tip of the iceberg for building computer vision systems that can recognise materials,” Zickler says.

The next step will involve finding ways to accurately measure a material’s phase function instead of constructing one computationally, and Zickler’s team is already making progress on this with a new system to be presented at the 6th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia in December 2013.

Zickler hopes to eventually understand these properties well enough to instruct a computer with a camera to identify what material an object is made of and to know how to properly handle it – how much it weighs or how much pressure to safely apply to it – the way humans do.

This study is part of a trio of papers; the others focus on new types of screen hardware that display different images when lit or viewed from different directions, and on digital film editing.