Fabio Pellacini

Full Professor of Computer Science
University of Modena and Reggio Emilia


Since the winter of 2023, I have been a Full Professor of Computer Science at the University of Modena and Reggio Emilia, where I work on Computer Graphics methods to solve content creation and design problems. Before then I was an Associate Professor (2011–2017) and then Full Professor (2017–2023) in Computer Science at Sapienza University of Rome, and an Assistant Professor (2005–2009) and then Associate Professor (2009–2011) in Computer Science at Dartmouth College, an Ivy League university. Before Dartmouth I was a Visiting Assistant Professor (2004–2005) in Computing and Information Science at Cornell University. I also worked in the research division of Pixar Animation Studios (2002–2004), developing new algorithms used in various award-winning feature films (Monsters, Inc., Finding Nemo, The Incredibles, Cars). I received a Ph.D. in Computer Science from Cornell University in 2002, working at the Program of Computer Graphics, and a Laurea degree in Physics (equivalent to a BS and MS) from the University of Parma, Italy. I received an NSF CAREER Award in 2008 and an Alfred P. Sloan Research Fellowship in 2009.



My main research interest is the investigation of computational methods to support the design of 3D objects, from their shape to their appearance, through the development of interactive rendering algorithms, intuitive design interfaces, novel fabrication methods, and user studies. This research has led to various theoretical results and practical applications in the areas of interactive realistic rendering, 3D printing, user interfaces, and visual perception. A comprehensive list of [publications][publications] is available.

Collaborative Design Workflows

While cloud-based computing has ushered in an era of real-time collaborative tools, 3D content creation is still mostly done by artists working individually on single assets. In this project, we investigate methods that allow artists to freely collaborate on 3D content both offline, via version control, and in real time, via collaborative interfaces.

Intuitive appearance design

Designing appearance, i.e., material and lighting parameters, is cumbersome since current interfaces require designers to specify algorithmic parameters that are only indirectly related to objects' appearance. My work investigates interfaces that allow users to effectively specify environments' final looks, and algorithms that automatically set the appearance parameters required to achieve users' goals.

Analysis and Support of Design Workflows

Content creation is the largest remaining bottleneck for a ubiquitous use of synthetic imagery. My experience in industry as well as academia is that, while many user interfaces exist, little is known objectively about how well these methods work in real-world usage and what bottlenecks remain in designers' workflows. In this project, we are interested in understanding how different users model a variety of shapes, materials, and lighting, first individually and then in the context of entire scenes.

Physical Appearance Reproduction

Today's printing methods can reproduce the colors and shapes of physical objects. This project investigates how to use current printing hardware to reproduce a wide variety of material appearances for opaque and translucent objects.

Efficient rendering of complex appearance

High-quality rendering of complex environments remains problematic, especially when complex materials and lighting are involved. I am interested in deriving scalable algorithms for offline rendering of complex appearance in environments with high geometric complexity. I believe these developments will shed light on how to derive interactive, scalable formulations on future machines. Some of these algorithms can also be used to provide the interactive feedback necessary for efficient appearance design. While current algorithms serve us well for simple scenes, I am particularly interested in guaranteeing interactivity while designing materials and lighting in globally illuminated, high-complexity environments, which remains problematic for typical applications in architectural, cinematic, and game lighting.

Perceptual metrics for rendering algorithms (not active)

The basic premise of appearance-based rendering is to avoid computing details of an image that our eyes cannot see. In the past, I have been interested in understanding how perceptual metrics can be used in interactive applications, instead of offline ones. To do so, I have investigated a decision theoretic formulation of interactive rendering, where perceptual metrics are used as priority schemes.

Non-photorealistic rendering (not active)

And finally, like most graphics people, I love artistic renditions. As a side project, I developed a generalized mosaicing algorithm that made the frontispiece for SIGGRAPH.



Leading Journal Articles: SIGGRAPH, TOG, PAMI, TVCG



Book chapters


University Courses

Conference Courses