The Changing Face of Digital Evidence

Last month, Data Narro traveled to Chicago to participate in a panel discussion about the alteration of digital evidence, something we deal with on a daily basis. After all, we are in the business of digital forensics and frequently work on cases that involve fraud, and people who perpetrate fraud tend to cover their (digital) tracks.

The most common type of evidence tampering that we see involves the alteration of file timestamps, as fraudsters attempt to hide data or rewrite the timeline of their actions. Other times, we find altered or fabricated emails. Both of these manipulations are easy to pull off, but are relatively trivial for a trained forensic investigator to detect.
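
To make the timestamp point concrete, here is a minimal Python sketch of one sanity check an examiner might run: flagging files whose last-modified time precedes their creation time. This is an illustration only, not Data Narro’s actual methodology; a real examination compares many more timestamp sources (on NTFS, for example, the $STANDARD_INFORMATION and $FILE_NAME attributes), which this sketch does not touch.

```python
# Illustrative heuristic only -- not a forensic tool. Flags files whose
# last-modified time precedes their creation time, a classic sign of
# copying, restoring, or deliberate backdating ("timestomping").
import os
import sys
from datetime import datetime, timezone

def flag_suspect_timestamps(root: str) -> None:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable file; skip it
            # st_birthtime is the true creation time on macOS/BSD;
            # on Windows, st_ctime holds the creation time instead.
            created = getattr(st, "st_birthtime", st.st_ctime)
            if st.st_mtime < created:  # modified "before" it existed
                print(f"SUSPECT: {path}")
                print(f"  created : {datetime.fromtimestamp(created, tz=timezone.utc)}")
                print(f"  modified: {datetime.fromtimestamp(st.st_mtime, tz=timezone.utc)}")

if __name__ == "__main__":
    flag_suspect_timestamps(sys.argv[1] if len(sys.argv) > 1 else ".")
```

A flag from a check like this is a lead, not proof: legitimate copies and restores can produce the same pattern, which is exactly why trained interpretation matters.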

When our discussion turned to future threats, we talked about a class of information manipulation that calls into question the very nature of digital evidence. We discussed the emergence of “deep fakes.”

Never before has it been so easy to alter photos and videos. The arrival of free, open source digital tools has allowed the proliferation of synthetic videos that blend real footage with AI-generated video and sound to create disturbingly realistic altered video. Thanks to a series of high-profile videos released in the past few weeks, these “deep fake” videos are again in the spotlight.

In May, a controversial video of House Speaker Nancy Pelosi began circulating on Facebook. In the video, Pelosi appears to be slurring her words as if she were drunk. In reality, the audio and video had been manipulated to paint an unflattering portrait of the political figure. While the video was quickly debunked and widely acknowledged to be fake, Facebook further stoked controversy by leaving the video on its platform whereas others (such as YouTube) had taken it down.

In response, artists Bill Posters and Daniel Howe created a convincing deep fake of Mark Zuckerberg and posted it on Instagram (which is owned by Facebook), testing the platform’s tolerance for manufactured or misleading content.

On close inspection, it is easy to determine that these videos are manipulated: the voice acting is imperfect, and the speaker’s movements don’t quite sync with the audio. Still, casual viewers are being fooled.

Don’t think it can happen to you? Take a look at the two images below. One depicts a real person; the other is computer generated. Can you tell which one is real?

One of these photos depicts a real human; the other is an AI-generated image. Can you guess which is which? Image courtesy of WhichFaceIsReal.com.

The answer? The image on the right is a real human; the image on the left is fake. Maybe you got this one right, but spend some time on the WhichFaceIsReal.com website and you are bound to get tripped up. The website is powered by StyleGAN, a freely available generative adversarial network that, as used here, produces photographs of human faces that do not exist.
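
For readers curious how software can dream up a face at all, the idea behind StyleGAN is adversarial training. The toy sketch below (assuming PyTorch is installed, and working on a one-dimensional toy distribution rather than images, so it is nothing like StyleGAN’s scale) shows the core loop: a generator learns to imitate real data while a discriminator learns to call out its fakes.

```python
# Toy GAN illustrating the adversarial training idea behind StyleGAN,
# at a vastly smaller scale: the generator must learn to mimic a target
# Gaussian, the discriminator must learn to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n: int) -> torch.Tensor:
    # "Real" data: samples from a Gaussian with mean 4.0 and std 1.5.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(3000):
    # Train the discriminator: real samples -> 1, generated samples -> 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to fool the discriminator into saying 1.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print(f"generated mean={samples.mean().item():.2f}, "
      f"std={samples.std().item():.2f}  (target: 4.00, 1.50)")
```

After a few thousand steps the generated samples approach the target distribution; scaled up by many orders of magnitude, with images in place of single numbers, roughly the same tug-of-war produces the faces on WhichFaceIsReal.com.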

It’s important to point out that while deep fake technology is still in its infancy, it is rapidly advancing. In a new demonstration of deep fake technology, researchers showcased software that lets a user edit the transcript of a talking-head video and have the footage automatically change to match: words can be added, deleted, or changed, and the video follows. It won’t be long before these tools mature, allowing anyone to create convincing deep fakes right from their smartphone.

What will our future look like if none of our photographs or videos can be trusted? While we will likely suffer through a parade of public deep fakes that seek to discredit politicians and celebrities, we will also see this technology affect us in more personal ways. What happens when questionable video evidence is submitted in criminal or civil proceedings that touch our lives: in divorce cases, small claims court, or civil litigation?

Already, deep fake detection tools are being released, and researchers are already adjusting generation algorithms to defeat them. We fully expect a game of cat-and-mouse over the next few years, but the technology may evolve to a point that makes it difficult, perhaps impossible, to definitively identify altered videos.
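
Most modern deep fake detectors are machine-learning classifiers, but the flavor of the artifact-hunting involved can be shown with a classic, much simpler still-image technique: error level analysis (ELA). The sketch below uses the Pillow library; the input filename is hypothetical, and this is a generic image-forensics heuristic, not any particular vendor’s detection tool.

```python
# Error Level Analysis (ELA): resave a JPEG at a known quality and
# amplify the per-pixel difference. Regions edited after the original
# compression often recompress differently and stand out as brighter
# areas in the output image.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, out_path: str = "ela.png",
                         quality: int = 90) -> None:
    original = Image.open(path).convert("RGB")
    # Recompress in memory at a fixed JPEG quality.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # The difference image is usually faint; scale it up to be visible.
    diff = ImageChops.difference(original, resaved)
    max_channel = max(hi for _lo, hi in diff.getextrema()) or 1
    ImageEnhance.Brightness(diff).enhance(255.0 / max_channel).save(out_path)

if __name__ == "__main__":
    error_level_analysis("evidence.jpg")  # hypothetical input file
```

ELA only surfaces JPEG recompression inconsistencies, and it is precisely the kind of signal that newer generators learn to suppress, which is the cat-and-mouse dynamic described above.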

In a world of deep fakes and fake news, what can we believe anymore? Will trust in digital media erode so much that we will need to rely on professionals to verify the authenticity of run-of-the-mill digital evidence?

At Data Narro, we are watching these developments closely and staying on top of the technologies that affect the world of digital evidence. Stay tuned to the Data Narro blog for regular updates!