Dating in the world of "deepfakes"
Written by Zoltan Istvan.
I met my wife on Match.com 15 years ago. She didn’t have a picture on her profile, but she had written a strong description of herself. It was enough to warrant a first date, and we got married a year later.
But what if ordinary dating sites allowed users to see their potential date naked using advanced AI that could “virtually undress” that person? Let’s take it a step further. What if they gave users the option to have virtual sex with their potential date using deepfake technology, before they ever met them in person? Some of this technology is already here. And it’s prompting a lot of thorny questions – not just for dating sites but for anyone who uses the web.
Deepfakes are synthetic media created by deep learning algorithms (a form of artificial intelligence). These videos, images and audio recordings depict faces, voices or entire personas that are often indistinguishable from the real thing. They have raised concerns due to their potential to deceive and manipulate. While originally used to create humorous or satirical content, they will increasingly be misused to spread fake news and to commit fraud or blackmail. Of particular concern is how they might affect personal relationships.
Deepfakes are crafted by training AI models on extensive datasets of the target person's visual and auditory data. These models can generate highly convincing content by synthesizing new data based on the patterns they learn during the training process. Efforts to combat deepfakes include developing detection algorithms and promoting media literacy to help the public tell the difference between authentic and manipulated content.
Let’s use the Socratic method to explore how dark this technology could actually get. Is it acceptable to view your best friend’s spouse naked without either of their permissions? Do I want one of my transhumanist fans taking a photo of me from the internet and then undressing me? What about my legions of haters: do I want them to create deepfake videos of me being strangled to death? What if they create images and videos of me strangling someone else?
In the modern world of cancel culture, many people have a “shoot first, ask questions later” mentality. Which means that deepfakes could prove incredibly dangerous. The polarized political climate in which we live has left many people ready to believe random memes of unknown origin – just because they suffer from confirmation bias. Clearly, our personal relationships are in trouble if anything can be fraudulently created about someone, and people simply don’t have the gumption to question whether it’s true.
Incidentally, I don’t want to discount the positive sides of AI and deepfakes. In fact, there are many pluses when it comes to personal relationships. I like the idea that couples can use their photos to see what their future babies will look like. I also like the idea that one can see what their partner will look like as a senior citizen. These are useful tools for deciding how one might feel about someone in the future, and whether one should still pursue them in the present. From an anthropological point of view, such tools are transformative in that they greatly increase choice for consumers.
Things get a little creepier when it comes to deepfakes and the dead. There are already companies working on bringing back dead loved ones virtually. AI can mimic their voices and idiosyncrasies, as well as the topics they like to talk about. In my own case, I’ve done hundreds of interviews, including with Joe Rogan, so AI could easily recreate my conversation style and many of my talking points. Should I die tomorrow, my kids could continue having conversations with me. And avatars could be made to look just like me. In fact, the avatar could change clothes and be in a different geographical location every time it talked to my kids. Honestly, as a longtime journalist and lecturer on transhumanism who often travels internationally, this wouldn’t be all that different from what my kids are used to.
Furthermore, many more people are becoming interested in the metaverse from the perspective of their legacy. For the first time in history, everyone has the opportunity to leave a lasting footprint of themselves, albeit digitally. I think the online experience is much more universal than anything that came before. It's almost like breathing air – everyone can do it. The experience of living and dying online crosses generations and religious faiths.
One prediction I have is that more people will end up spending time in the metaverse, even if deepfakes and imposters lurk there. I think the real breakthrough will come when we have access to devices that immediately put us in a virtual and augmented reality, without needing to wear a piece of technology. Companies are already rushing to develop such devices, some of them based on holograms.
For example, if we are dying, we may have virtual dead family members there to comfort us; they could even be programmed to welcome us to die and join them. In the not-too-distant future, I assume Alexa, Siri, and other AI devices will be able to beam images and digital people around us at every moment. Virtual beings will watch over us as we sleep. They will go on hikes with us. They will discuss philosophy with us. They will be our personal assistants and best friends, always looking over our shoulders.
Of course, such technology could be weaponized with evil intent. Should pedophiles have access to technology that would allow them to do what they wish with AI-created virtual children? Worse, they might use deepfakes based on real images and videos of existing people, such as the children of celebrities or politicians.
Upholding liberty has always been challenging due to a minority of bad actors. However, I’m confident that big tech (with the insistence of government) will soon come up with ways to protect virtual people, especially those that resemble real people. One way they already do this is verification – requiring users to submit images of themselves for facial recognition, which the AI knows can only ever belong to one person. Individuals can also be notified if an image of them emerges online, asking whether they give their consent for it to be used.
Unfortunately, this only works for facial recognition at the moment. People’s bodies can still be replicated. For example, we can sometimes recognize the bodies of celebrities like Dolly Parton or Dwayne "The Rock" Johnson without seeing their faces. What’s more, some people look alike, and AI will inevitably make mistakes. Even a one percent error rate in a country as large as the United States could mean hundreds of thousands of people being deprived of their privacy and subjected to harmful deepfakes.
Another important way to deal with deepfakes and other deceptive content online is to simply be aware of it – to always be on guard. Of course, this is a lot of work. And the radical changes caused by so much transformative technology are already tiring and off-putting to many. I have numerous friends who will no longer use Facebook for social communication, Tinder for dating, or eBay for selling. They are done dealing with the negative aspects of technology.
But deepfakes and AI-created content don’t have to be seen as something exclusively negative. Deepfakes of Russian politicians performing unsavory acts could help democratic societies see the value in fighting for liberty. Like the fiction our society has consumed for decades via books and movies, deepfakes may help us understand the changing nature of the world we live in. They also serve up plenty of humor, something we all could use a little more of in these hyper-sensitive times.
Like all new technology, deepfakes present both opportunities and threats – especially where our personal relationships are concerned. We’ll certainly need much better laws for protecting the privacy of children and anyone who doesn’t want images and videos of themselves being manipulated on the internet. But deepfakes could also be a valuable new tool that brings humor, creativity and understanding. We might discover more about each other with much less effort than before. For personal relationships and online dating, this could ultimately be a win.
Zoltan Istvan began his career at National Geographic as a journalist. Later he penned The Transhumanist Wager, a novel that launched the activist side of the transhumanism movement. He is the founder of the Transhumanist Party and the creator of the Transhumanist Bill of Rights, now a crowdsourced document.
"But deepfakes and AI-created content don’t have to be seen as something exclusively negative. Deepfakes of Russian politicians performing unsavory acts could help democratic societies see the value in fighting for liberty."
So you're saying that deepfakes may prove useful in fabricating propaganda claims.
In other words, lies.
So you think "requiring users to submit images of themselves for facial recognition" to "big tech (with the insistence of government)" is a way to protect people? You're "confident" of that?
Is this satire?