The eFacsimile project focuses on high-quality Artwork acquisition and reproduction for modern digital devices such as laptops, smartphones and tablets. Faithfully capturing and digitally representing all the various features that compose a visual artwork (e.g. colors, textures, light reflectance) can greatly enhance the viewing experience. Combined with an ergonomic interface, the user can then interact with and actively manipulate the digital replica in a completely new way. This project is generously sponsored by Google Inc. and is developed in close cooperation with senior researchers at Google in Zurich.
Visual Artworks provide a powerful means of expression and can transcend their status of mere physical objects by evoking emotions in their viewers. Recent studies have shown that works of Art activate the same neural pathways in the brain that are activated during human interactions (love, anger, sadness). In his recent work*, neuropsychiatrist and Nobel Prize recipient Eric Kandel describes how the colors, texture and contrast of a painting like Judith and the Head of Holofernes (by G. Klimt, right) push the brain to release endorphins and serotonin, creating an emotional tension that is exciting to the viewer.
Besides its symbolic content, the strength of an Artwork therefore also resides in specific attributes skillfully mastered by the artist: the brush strokes of an oil painting provide a 3D relief that accentuates its expressiveness, and the use of metallic paints and varnishes increases specular reflections, intensifying the resulting contrast. Capturing and displaying these various attributes correctly is thus crucial for reproducing an Artwork accurately in digital form. It becomes clear that a single static image is not sufficient to this end.
The project investigates the use of various camera technologies (e.g. high-resolution, multi-spectral or light-field) and novel illumination solutions to accurately record such features. Recent advances in signal processing and sampling theory are also explored to achieve higher resolving power and to efficiently store such datasets for rendering on displays.
* Kandel, E. R. (2012). The Age of Insight: The Quest to Understand the Unconscious in Art, Mind, and Brain, from Vienna 1900 to the Present. New York: Random House. ISBN 978-1-4000-6871-5.
ARTMYN was born in 2015 from the technology and knowledge developed in the eFacsimile project at LCAV/EPFL. ARTMYN develops new solutions for digitizing and visualizing Artworks on digital displays. Follow us at: www.artmyn.com
Team members & Research
Oil paintings are essentially planar objects. However, the texture at the surface of the painting, created by the brush strokes and the grain of the canvas, plays a major role in the viewing experience. In addition, the different layers of paint, with their various dyes, varnishes and resins, generate complex light reflections that are unique to each original painting.
In this project, we are investigating recent advances in Computational Photography, Sparse Sampling and Light-field Processing as an attractive framework to study and characterize Visual Artworks for digital reproduction. Recently arrived on the consumer market, light-field cameras can capture more information in a single exposure than traditional cameras, enabling faster acquisition. In addition, new methods are needed both to process and to compress the large amount of information generated when a visual artwork is completely characterized.
A special class of artworks are woodcuts. Woodcut images have a simple characterization as piecewise continuous functions with discontinuities along smooth curves. The simplicity of this class of images has made it a good target for developing representation domains that can encode such images efficiently. We therefore expect to be able to reconstruct high-resolution woodcut images if we represent them in these domains. But how can we calculate the coefficients of such a representation from a digital image?
The digital imaging process can be modeled as projecting a continuous-space image onto a finite-resolution sampling subspace. The resolution of this subspace is determined by the physics of the camera and may be very limiting for some applications. In these cases, we can improve the image resolution by representing the image in another domain that is well adapted to a given class of images, meaning that the coefficients of the representation are sparse or decay very fast. In the case of sparse coefficients, we can obtain an infinite-resolution representation of the image from only a few non-zero coefficients. However, this requires a technique to approximate the non-zero coefficients in the new domain from pixel values.
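A toy illustration of why a well-adapted domain helps: a piecewise-constant signal (the 1D analogue of a woodcut image) has very few non-zero coefficients in the Haar wavelet domain, because detail coefficients vanish wherever the signal is constant. This is a minimal sketch with NumPy; the signal and sizes are made up for the example:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n x n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                    # scaling (averaging) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # detail (difference) rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

n = 64
x = np.zeros(n)
x[:20] = 1.0        # piecewise-constant signal with
x[20:45] = -0.5     # two discontinuities

H = haar_matrix(n)
c = H @ x                                # Haar coefficients of x
nnz = int(np.sum(np.abs(c) > 1e-10))
print(f"{nnz} nonzero Haar coefficients out of {n}")
```

Only the coefficients whose support straddles a discontinuity survive, so the count grows like the number of discontinuities times log n rather than like n.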
When the reconstruction and sampling subspaces (domains) make a small angle with each other (the angle between subspaces measures their similarity), we can use the generalized sampling (GS) approach to approximate the coefficients in the new domain. GS is a linear method that gives a stable approximation of the coefficients in a fixed, finite-dimensional subspace of the representation domain.
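A minimal numerical sketch of the GS idea, with illustrative choices of bases (box functions as a pixel model for sampling, low-frequency cosines for reconstruction): samples are inner products with the sampling basis S, and coefficients in the reconstruction basis R follow by inverting the cross-Gramian S^T R:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 32, 8

# Reconstruction subspace: first k low-frequency cosine vectors (illustrative).
t = np.arange(n)
R = np.cos(np.pi * np.outer(t + 0.5, np.arange(k)) / n)       # n x k
R /= np.linalg.norm(R, axis=0)

# Sampling subspace: k box functions (block averages), as a pixel model.
S = np.kron(np.eye(k), np.ones((n // k, 1))) / np.sqrt(n // k)  # n x k

f = R @ rng.standard_normal(k)    # a signal lying in the reconstruction subspace
s = S.T @ f                       # measured samples (inner products with S)

# Generalized sampling: invert the cross-Gramian to get the coefficients.
c = np.linalg.solve(S.T @ R, s)
f_hat = R @ c

print(np.linalg.norm(f - f_hat))  # near zero: f was in the reconstruction space
```

When the signal truly lies in the reconstruction subspace and the cross-Gramian is well conditioned (a small angle between the subspaces), the recovery is exact up to round-off.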
To reconstruct an infinite-resolution image, however, we require a nonlinear method that can discover the subspace of the representation domain containing the image (or, equivalently, the support of the non-zero coefficients). A possible solution is to combine the sparse recovery techniques of compressive sensing with GS. In my research, I address the problem of recovering sparse and compressible signal representations from samples taken in an arbitrary sampling space.
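One standard sparse recovery technique from compressive sensing is orthogonal matching pursuit (OMP), which greedily discovers the support of the non-zero coefficients. The sketch below is a generic OMP on a random sampling operator, not the project's specific method; all names and dimensions are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k = 32, 64, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sampling operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_true - x_hat))
```

Once the support is found, the final least-squares step plays the role of the linear GS stage restricted to the discovered subspace.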
Interactive relighting of an arbitrary scene has many potential applications in a world of smart displays. The relighting problem itself is completely determined by an 8D bidirectional scattering distribution function (BSDF). Traditional approaches have concentrated on recovering the angular subspace: the bidirectional reflectance/transmittance distribution functions (BRDF or BTDF). The significance of the spatial subspace spanned by the BSDF has driven the evolution of alpha matting into environment matting in the computer graphics community. These techniques have converged on what is now known as the light transport matrix.
The light transport matrix (LTM) is powerful in that it can represent scattering, the phenomenon underlying both reflection and refraction in real-world materials, in a very efficient way. My research focuses on LTM recovery under ill-posed conditions and in uncontrolled environments.
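The core property that makes the LTM useful is the linearity of light transport: the image under any lighting vector is a matrix-vector product. A minimal synthetic sketch (the `capture` routine and the tiny dimensions stand in for a real camera and scene) shows brute-force acquisition, one light at a time, followed by relighting:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_lights = 6, 4

# Ground-truth transport matrix standing in for a real scene (illustrative).
T_true = rng.random((n_pixels, n_lights))

def capture(lighting):
    """Stand-in for a camera exposure of the scene under the given lighting vector."""
    return T_true @ lighting

# Brute-force acquisition: turning on one light at a time recovers T column by column.
T = np.column_stack([capture(np.eye(n_lights)[:, j]) for j in range(n_lights)])

# Relighting under any new illumination is then a single matrix-vector product.
l_new = np.array([0.2, 0.5, 0.0, 0.3])
image = T @ l_new
print(np.allclose(image, capture(l_new)))   # True: linearity of light transport
```

With thousands of lights and megapixel images the matrix becomes enormous, which is why recovery from few, uncontrolled measurements (the ill-posed setting above) is the interesting regime.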
The multi-view acquisition of an object under different illuminations can be achieved with a brute-force approach by capturing the full light-field under all possible lighting conditions, which requires dense sampling and huge storage. The same data can also be generated from far fewer image samples with a parametric reflectance model (Polynomial Texture Maps) for one fixed view (as shown in our video below). Fewer samples and parametric modelling, however, usually introduce artifacts when dealing with non-convex, non-Lambertian surfaces.
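A Polynomial Texture Map fits each pixel's luminance as a biquadratic polynomial in the projected light direction (lu, lv), so six coefficients per pixel can be estimated by least squares from a modest stack of differently lit photographs. This is a sketch with synthetic data; the dimensions and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_lights, n_pixels = 12, 5

# Light directions projected onto the surface plane: (lu, lv) per exposure.
lu = rng.uniform(-1, 1, n_lights)
lv = rng.uniform(-1, 1, n_lights)

# PTM design matrix: one row per lighting condition, six biquadratic terms.
P = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones(n_lights)])

# Observed luminances: n_lights exposures of n_pixels each (synthetic here).
a_true = rng.standard_normal((6, n_pixels))   # hidden per-pixel coefficients
L = P @ a_true                                # stacked observations

# Per-pixel coefficients by least squares, solved for all pixels at once.
a_hat, *_ = np.linalg.lstsq(P, L, rcond=None)

# Relight every pixel under a new light direction (u, v).
u, v = 0.3, -0.2
relit = np.array([u * u, v * v, u * v, u, v, 1.0]) @ a_hat
```

The smooth polynomial is exactly what breaks down on non-convex, non-Lambertian surfaces: sharp shadows and specular highlights are not biquadratic in the light direction, which produces the artifacts mentioned above.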
The lighting and viewing conditions play an important role when appreciating an artwork such as an oil painting or a woodcut. As members of the eFacsimile project, we are especially interested in image acquisition for interactive rendering under varying lighting and viewing conditions. In our research, we are modeling realistic reflectance functions from sparse samples by exploiting redundancy in textures, painting materials and geometric structures. When the object to be acquired is mostly flat, like a painting, the simplified problem is often referred to as a two-and-a-half-dimensional (2.5D) problem.
We are also working on synthetic relighting of oil paintings from a single input image, using coupled dictionary training on image texture and a reflectance model. We aim to provide rendering atoms for oil paintings that combine texture, geometry and reflectance information.