State of the Art: Recovering Details in Blurry Photos ... and Plant Selfies

By David Schonauer   Wednesday November 20, 2019

You’ve heard of computational photography.

But what about bio-photography?

Recently, a group of MIT researchers reported that they have developed a way to recover lost details from images and create clear copies of motion-blurred video footage.

Their creation, an algorithm called a "visual deprojection model," is based on a “convolutional neural network.” That may sound like your brain before coffee in the morning, but this neural network was stimulated in a different way.

Researchers fed their network pairs of “low-dimensional projections” (e.g. a long exposure made by merging a video into a single image) and the original high-dimensional data (e.g. the actual video). By doing so, their algorithm learned to spot and recreate the patterns linking the two, notes PetaPixel.
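To make the idea of a “low-dimensional projection” concrete, here is a minimal sketch of how a video collapses into a single long-exposure image by averaging frames along the time axis. The frame sizes and the moving-square data are synthetic placeholders, not the researchers' actual training data or code:

```python
import numpy as np

# Fake grayscale "video": 24 frames of 64x64 pixels, with a bright
# square that shifts one pixel to the right each frame (a motion-blur source).
video = np.zeros((24, 64, 64), dtype=np.float32)
for t in range(24):
    video[t, 20:30, 10 + t:20 + t] = 1.0

# Collapse the time dimension: the moving square becomes a blurred streak,
# just as moving stars streak in a long-exposure still.
projection = video.mean(axis=0)

print(projection.shape)  # (64, 64)
```

A training pair for the model would then be (projection, video): the network's job is to invert this collapse, recovering plausible frames from the streak alone.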

“When the model is used to process previously unseen low-quality images with blurred elements, it analyzes them to figure out what in the video could've produced the blur. It then synthesizes new images that combine data from both the clearer and blurry parts of a video. Say, you have footage of your yard with something moving on screen — the technology can create a version of that video that clearly shows the movement's sources,” adds Engadget.

The trained model, adds PetaPixel, was able to recreate 24 frames of a person walking, “down to the position of their legs and the person’s size as they walked toward or away from the camera.”

It gets weird, but fascinating. “Captured visual data often collapses data of multiple dimensions of time and space into one or two dimensions, called ‘projections,’” noted MIT News. “X-rays, for example, collapse three-dimensional data about anatomical structures into a flat image. Or, consider a long-exposure shot of stars moving across the sky: The stars, whose position is changing over time, appear as blurred streaks in the still shot.” The researchers found a way to recover or recreate some of this lost information, adds PetaPixel.

For now, the researchers are more focused on refining the technology for medical use. “They believe it could be used to convert 2D images like X-rays into 3D images with more information like CT scans at no additional cost — 3D scans are a lot more expensive — making it especially valuable for developing nations,” notes Engadget.

Meanwhile, scientists at ZSL London Zoo have developed the world’s first plant-powered camera system.

The system, reports DIY Photography, uses the energy from a fern (named Pete, by the way, seen at top) to power a camera, allowing the plant to take its own photo.

And we thought it was news when a monkey shot a selfie.

“Plants naturally deposit biomatter as they grow, which in turn feeds the natural bacteria present in the soil, creating energy that can be harnessed by fuel cells and used to power a wide range of vital conservation tools remotely, including sensors, monitoring platforms and camera traps,” note the researchers.

How much photo-taking energy can Pete the fern generate? Enough to snap a photo every 20 seconds.

“Pete has surpassed our expectations,” says ZSL Conservation Technology Specialist Al Davies. “He’s been working so well we’ve even accidentally photobombed him a few times!”

This bio-photography technology may one day aid conservation efforts, allowing researchers to monitor plant growth, temperature, and other data using remote hardware without relying on solar panels and batteries. Following additional refinement, the team plans to test the technology in the wild.
At top: From the ZSL London Zoo



Pro Photo Daily