If AI deepfakes can listen to video or audio of a person and then convincingly reproduce that person's voice and likeness, what does this entail for trials?
It used to be that audio or video recordings carried strong evidentiary weight, often more than witness testimony, but soon enough perfect forgeries could enter the courtroom, just as they are already entering social media (where you're not sworn to tell the truth, though the consequences are real).
I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.
How will we defend ourselves while still using real video or audio as proof? Or are we just doomed?
I'm not a tech person, so I'll take the lowest-hanging fruit. The obvious answer is to write a program that can detect AI-generated content. Then there will be an arms race between AI forgery and AI detection. This is similar to what we have in sports: forbidden enhancement procedures (e.g., steroids, blood doping) have to keep getting subtler so as not to be caught by anti-doping measures, which improve in turn.
That's essentially how generative adversarial networks (GANs) work, and the effect is that the generative program gets better and better at producing fakes the detector cannot distinguish from real data.
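To make the GAN dynamic concrete, here is a deliberately tiny toy sketch (not a real deepfake system): "real evidence" is just numbers drawn from a fixed distribution, the forger is a one-parameter generator, and the detector is a one-feature logistic classifier. All the names and the specific distribution are illustrative choices, not anything from the original post. Training them against each other shows the arms race: the forger's output drifts toward the real data until the detector can no longer tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" evidence: scalar samples from N(4, 1) (an arbitrary toy choice).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Forger (generator): g(z) = a*z + b, initially producing samples near 0.
a, b = 1.0, 0.0
# Detector (discriminator): d(x) = sigmoid(w*x + c), initially guessing 50/50.
w, c = 0.0, 0.0

lr, steps, n = 0.05, 3000, 64
for _ in range(steps):
    # --- detector step: learn to score real samples high, fakes low ---
    xr = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b
    sr = sigmoid(w * xr + c)   # detector's score on real samples
    sf = sigmoid(w * xf + c)   # detector's score on fakes
    # gradients of the usual GAN loss -log d(real) - log(1 - d(fake))
    grad_w = np.mean(-(1 - sr) * xr) + np.mean(sf * xf)
    grad_c = np.mean(-(1 - sr)) + np.mean(sf)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- forger step: adjust fakes to raise the detector's score on them ---
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b
    sf = sigmoid(w * xf + c)
    dxf = -(1 - sf) * w        # gradient of -log d(fake) w.r.t. each fake sample
    a -= lr * np.mean(dxf * z)
    b -= lr * np.mean(dxf)

# After training, the forger's output distribution sits near the real one.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(fake_mean)  # close to the real mean of 4, despite starting at 0
```

The key point for the courtroom worry is the equilibrium: once the fakes match the real distribution, the detector's best strategy is a coin flip, so detection alone cannot be the whole defense.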