A deep fake is truly the next generation of video and audio manipulation. It is created by an AI system that plots points on a face, studies those features, and then transfers them onto another person's face. In my opinion, deep fakes will grow to be extremely dangerous as the technology develops. Supasorn Suwajanakorn says in his TED Talk that deep fakes could soon become nearly impossible to recognize as fake. This is threatening because they can easily be used for blackmail, to ruin someone's reputation through misrepresentation, or to mislead the public on a large scale. However, I will say that the same technology gives filmmakers and editors a leg up, letting them do amazing things in the entertainment world.
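To picture what "plotting dots on a face" looks like in practice, here is a minimal sketch using the open-source MediaPipe Face Mesh model. This is my own choice for illustration, not the exact tool Suwajanakorn describes, and the file name face.jpg is a made-up example. All it does is detect landmark points on a face in a photo and draw them as dots, which is the kind of facial map a deep-fake system learns from.

import cv2
import mediapipe as mp

# Load the face landmark model for single still images.
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

image = cv2.imread("face.jpg")  # hypothetical input photo
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    height, width = image.shape[:2]
    # Draw each detected landmark as a small green dot.
    for point in landmarks:
        x, y = int(point.x * width), int(point.y * height)
        cv2.circle(image, (x, y), 1, (0, 255, 0), -1)
    cv2.imwrite("face_with_dots.jpg", image)

The actual face-swapping step involves training a model on those mapped features, but this shows the "dots" stage that makes the rest possible.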
Overall, deep fakes concern me, but there are a couple of giveaways. We should check for voice accuracy, video quality, and visual clues such as blurring, inconsistent skin tone, or unusual shadows. Suwajanakorn also speaks about a new detection program he is working on called Reality Defender. Until it arrives, we can help by asking ourselves whether the source is reliable and whether the video seems out of character, and by doing our own separate research.
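Some of these visual clues can even be checked automatically. Here is a minimal sketch, assuming OpenCV and a hypothetical frame saved as frame.jpg, of one simple check for blurring; the 100.0 threshold is an arbitrary assumption for illustration, and this is nowhere near a real detector like Reality Defender.

import cv2

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# The variance of the Laplacian is a standard sharpness measure:
# a low value suggests the frame (or a pasted-in face) is unusually blurry.
blur_score = cv2.Laplacian(frame, cv2.CV_64F).var()

if blur_score < 100.0:  # assumed threshold, for illustration only
    print(f"Frame looks blurry (score {blur_score:.1f}) - worth a closer look")
else:
    print(f"Frame looks sharp (score {blur_score:.1f})")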