AI Program Turns Sketches into Faces

Credit: Hongbo Fu via YouTube

Computers can not only recognize faces; now they can make them, too.

It feels like every day, computers get closer and closer to attaining a perfect understanding of the human face, to a mildly creepy extent. Kinda reminds me of that episode of Are You Afraid of the Dark with that witch lady stealing people’s faces. Still, no matter how creepy it is, you can’t deny its impressiveness. Not only can an AI identify the various contours of the human face, but thanks to a new program, it can now create an entirely new face with nothing to go on but a face-shaped doodle.

Researchers from the Chinese Academy of Sciences in Beijing have developed DeepFaceDrawing, an AI program with the ability to parse simple sketches of faces and transform them into complete images of human faces. According to the accompanying paper, the purpose of the program is to aid “users with little training in drawing to produce high-quality images from rough or even incomplete freehand sketches.”

Unlike previous programs built on similar ideas, DeepFaceDrawing constructs its faces through a combination of database retrieval and probabilistic modeling. Different parts of the face (eyes, nose, mouth, hair, and more) are parsed separately, then assembled into a single image.


The paper explains, “Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches.”
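To make the "soft constraint" idea concrete, here is a toy Python sketch of the component-matching step: each facial part of a rough sketch is projected onto its nearest plausible sample from a database of learned part features, rather than being copied literally. All of the feature vectors, component names, and database entries below are made-up placeholders for illustration, not the researchers' actual data or code.

```python
import math

# Toy "database" of learned feature vectors per facial component.
# In the real system these would come from a trained encoder; here
# they are arbitrary 2-D placeholders.
DATABASE = {
    "left_eye":  [[0.1, 0.2], [0.9, 0.8]],
    "right_eye": [[0.2, 0.1], [0.8, 0.9]],
    "nose":      [[0.5, 0.5], [0.3, 0.7]],
    "mouth":     [[0.4, 0.6], [0.7, 0.2]],
    "remainder": [[0.6, 0.4], [0.2, 0.8]],
}

def nearest(feature, candidates):
    """Return the database sample closest to the sketched feature."""
    return min(candidates, key=lambda c: math.dist(feature, c))

def refine_sketch(sketch_features):
    """Snap each sketched component onto its nearest plausible sample.

    sketch_features maps component name -> toy feature vector extracted
    from the rough sketch; the output is the refined per-component
    features that a decoder network would then fuse into a face image.
    """
    return {part: nearest(vec, DATABASE[part])
            for part, vec in sketch_features.items()}

# A "rough sketch": features that land near, but not on, known samples.
rough = {"left_eye": [0.15, 0.25], "right_eye": [0.75, 0.85],
         "nose": [0.45, 0.55], "mouth": [0.65, 0.25],
         "remainder": [0.25, 0.75]}
print(refine_sketch(rough))
```

Because each component is snapped to the nearest point in a space of plausible faces, a wobbly or incomplete doodle still yields sensible inputs for the image-synthesis stage, which is exactly why the method tolerates amateur sketches where edge-map-based systems overfit.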

The only consistent snag the program has run into is one of race: most of the generated faces are Caucasian or South American, which the researchers theorize could be due to either a lack of diverse source data or the complexity of certain facial types. They’ll be looking for feedback at this year’s SIGGRAPH conference in July (held remotely, of course). Once the program is polished, the researchers are considering releasing the source code for others to tinker with.