
Follow-Up: Can AI Help Restore Our Faith in Photography?

By David Schonauer   Wednesday July 31, 2019


It’s a deepfake world we’re living in.

Can we get out?

Digital technology has provided photographers with a powerful arsenal of creative tools, making it easy for anyone to seamlessly alter pictures — and videos. This use of technology has undermined people’s faith in the veracity of pictures and has the potential to upend our politics.

(Will political figures use doctored videos to smear their opponents? It’s already happening. See this and this.)

So worrisome is the prospect of manipulated videos — known as deepfakes — that The Washington Post recently published a “fact-checker’s guide” to spotting them. “We have found three main ways video is being altered: footage taken out of context, deceptively edited or deliberately altered,” noted The Post.

Deepfakes are a big problem for Facebook, which has been busy for the past year or so trying to convince Washington and everyone else that it will not be a vector for spreading fake news. As PetaPixel noted recently, Facebook has attempted to reduce the reach of deepfakes and also to display them alongside fact-checking information.

But the job is harder than Facebook CEO Mark Zuckerberg imagined. A series of deepfake videos featuring celebrities like President Trump, Kim Kardashian, and Zuckerberg himself presented what Motherboard called a test for Facebook. CNN was also worried.



But what technology wrought, technology can make right. Hopefully. As we reported recently, scientists are trying to rein in photo fakery with new technology that might be built right into cameras. These new tools would be powered by artificial intelligence.

Researchers at New York University’s Tandon School of Engineering recently published a study — titled “Neural Imaging Pipelines - the Scourge or Hope of Forensics?” — detailing a method in which a neural network replaces the photo-development process inside the camera, marking the original image with something like a digital watermark to establish its provenance in later forensic analysis. See Gizmodo.

More recently came news of artificial intelligence that can tell if faces in pictures have been Photoshopped. The technology was described by researchers at Adobe and UC Berkeley in a paper titled “Detecting Photoshopped Faces by Scripting Photoshop.” In it, they explain how AI can figure out if Photoshop’s Face Aware Liquify feature was used in a photo, noted PetaPixel.

Meanwhile, the science of fakery marches on: As we noted recently, researchers at the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology have published a paper detailing new software that can generate 3D animated heads from a still image. DIY Photography explained the breakthrough.

But wait, there’s more: PetaPixel recently reported that computer scientists at the University of Washington and Facebook — yes, Facebook — have created an AI that can animate a human subject from a single still photograph, “bringing them to life” by making them walk, run, sit, or jump out of a photo in 3D. They described their work in a paper titled “Photo Wake-Up: 3D Character Animation from a Single Photo.”

 


