The Adobe computer software company has made strides in exposing manipulated media with a newly developed artificial intelligence (AI) tool that can identify edited images.

In a recent post, Adobe expressed concerns over the widely accessible tools for altering media, particularly technologies such as deepfakes, AI software that can simulate an individual’s likeness in video by sampling just one image.

“Fake content is a serious and increasingly pressing issue,” stated the company, whose products span media editing from photos to video. “While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology.”


The same post announced new research conducted with researchers from the University of California, Berkeley: a machine-learning model that can automatically detect media manipulated using the Liquify tool found in Photoshop.

The algorithm was developed by exposing a neural network to a paired-faces database of altered images and their unedited counterparts. 

The tool is said to be quite effective, with Adobe favorably comparing the success rate of its AI against human volunteers. 

While the volunteers’ success rate was 53 percent, the AI’s was an overwhelming 99 percent. The AI can even make suggestions about restoring edited media to its original form, although the results are still mixed. 

At the moment, the company has shown no indication that it intends to commercialize the tool. However, an Adobe spokesperson said it was just one of many “efforts across Adobe to better detect image, video, audio and document manipulations”. (ayr/kes)