Looks like it's time for another PSA, y'all. Buckle in.
Unless you've been living under a rock since the 2016 U.S. presidential election cycle, you know that "fake news" has been running rampant. We're living in a Nineteen Eighty-Four-esque age of doublespeak, fueled by actual White House Press Secretaries and anyone savvy enough to set up a blog. The very offices and organizations that are supposed to tell us where we stand on the world stage are lying in very public forums about easily verifiable facts.
Shit's real and shit's scary.
Here's the real rub, though: it's about to get a LOT worse.
I think it's fair to say that everyone reading this is familiar with Photoshop, Adobe's image editing software. Whether it's through making longcat memes or looking at airbrushed photos of models in magazines, everyone more or less knows what Photoshop can do.
One of Adobe's prototypes, called Project VoCo, is basically Photoshop for voice. It gives you the ability to create sound bites of someone saying something they never actually said, but have it sound like their voice. And it's as easy as typing.
ONE MORE TIME FOR THE PEOPLE IN THE BACK.
You can make it sound like people said something they didn't really say just by typing out some words.
"But Henry, that can't possibly be real."
Oh yeah? Go watch Adobe's live VoCo demo from Adobe MAX.
"Oh, shit."
Yeah. In fact, that video was from all the way back in November of 2016. Technology has gotten a lot better since then.
Jordan Peele's production company actually used Adobe After Effects and a tool called FakeApp to make a video of Barack Obama whose mouth movements and mannerisms are synced to Peele's impersonation of him, reminding us to "stay woke, bitches."
"Are you serious?"
For realsies times a million. Now just imagine if you combined After Effects, FakeApp, AND VoCo. You scared yet?
The problem has actually been getting worse in 2018, with a wave of deepfake celebrity porn. People are using artificial intelligence to face-swap celebrities onto porn stars. In the wake of Celebgate, where a number of celebrities were embarrassed by having their private photos leaked, there are now people actively putting your favorite celebrities' faces onto performers mid-scene. And because the AI keeps getting better, these videos look real enough that I'm sure a good chunk of people couldn't tell the difference.
If that's not troubling enough, let's look at the implications.
People my age have been told our entire lives to be careful about what we put online. We didn't want pictures of us drinking underage (sorry, Mom) to pop up in case we wanted to run for office one day. We needed to be careful about pictures shared with significant others. We were told to keep certain opinions to ourselves to avoid "stirring the pot." Something could pop up decades down the line and compromise a goal we didn't know we would ever want to work towards. One lapse in judgment on something we uploaded willingly as an angsty teenager could completely derail a potential future career.
And forget about what other people decide to post. Depending on the platform, it can be near impossible to take down. I've definitely seen pictures online of friends, family, and exes that could put those people in compromising positions somewhere down the line.
So what happens when these tools get so simple that everyone can use them? Some really sketchy stuff, that's what.
Imagine the 2020 U.S. presidential election cycle with this kind of technology in the mix. Super PACs could make their opponent look like they're lighting a cross on fire, sacrificing a baby goat, and whispering some words to Satan.
But let's be really, really real for a minute. People would never do something that brash, right? Most people would understandably realize that something so ridiculously outlandish is probably fake. That is to say, going too far outside the box wouldn't work for a completely fabricated smear campaign.
/shouting: This is why I wanted to write this post.
Fabricated smear campaigns ARE going to happen, but they're going to be incredibly subtle. Just like the targeted manipulation of the 2016 presidential election (think Russian troll farms, or Cambridge Analytica's psychographic profiling), whoever ends up taking the helm on putting words in someone else's mouth during a campaign is going to come up with some very focused quips aimed at a very small but specific group of people. They won't need everyone in the country to believe what's put in front of them. Just enough voters in key swing states to turn an election.
To put that more succinctly: people will use this technology to spread enough small (but believable) untruths to convince voters in key states that a good candidate may be a poor choice.
They'll do this targeting with data. It's happened before. After all of the investigations into the 2016 elections, we've seen that it doesn't take much to derail an election. This technology will let someone do it again with a lot less effort.
"Well, that's scary as shit. Is there a way to tell if those sound/video bites are real or not?"
Currently, not really, and that's what really scares me. There are a lot of charts out there, like the Media Bias Chart, that help you gauge the partisan bias and information quality of different news sources. So if you see something from Patribotics or Infowars, you know it'll inherently carry a huge bias, and it might not be factually accurate. Knowing where your source lies on a chart like that can help you gauge how true something might be.
But in a world where any idiot can set up a WordPress blog (whaddup?) and run a "news" site, it can be difficult to know exactly who we can trust to deliver us factual information. There are people out there working on a solution, however.
There's the old way of signing posts with PGP keys. Basically, you'd sign a blog post or article or Facebook update with your private key, and anyone could use your published public key to verify that it was indeed you who wrote or posted that item. However, this can be tricky for non-technical people.
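If PGP tooling feels opaque, the underlying idea is just public-key signatures. Here's a minimal sketch in Python using the `cryptography` package with Ed25519 keys instead of full PGP (an assumption on my part for brevity, but the flow is identical: sign with the private key, verify with the public one):

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The author generates a keypair once and publishes the public half.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"Fabricated smear campaigns ARE going to happen."

# The author signs the post with the key only they hold.
signature = private_key.sign(post)

# Anyone holding the public key can check that the post wasn't
# forged or altered; verify() raises if either is the case.
try:
    public_key.verify(signature, post)
    print("Signature checks out: the keyholder really wrote this.")
except InvalidSignature:
    print("Forged or tampered with!")
```

The math is solid; the hard part is getting normal people to manage keys and actually run the verification step.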
You could post your work on a decentralized (blockchain) application like Steemit. That way, there's a public ledger of who posted what. Instead of keeping all of the information related to the post in a closed-off, private database, it would all be publicly available. If you wanted to, you could look at the blockchain and see that a certain author used their private key to post an article or blog post to Steemit. Basically, you could be sure that a certain politician actually put out a certain video if there's some concern about authenticity. The only problem is that not a lot of people are using these decentralized platforms yet, and right now the whole crypto space isn't very forgiving to new or non-technical users. There's still a lot of UX optimization to be done there.
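To make the ledger idea concrete, here's a toy sketch (not Steemit's actual data model, just an illustration) of an append-only chain where every entry is signed by its author and commits to the hash of the previous entry, so anyone can replay the chain and audit who posted what:

```python
# pip install cryptography
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ledger = []  # the public, append-only record

def append_post(author_key: Ed25519PrivateKey, content: bytes) -> None:
    """Add a signed entry that also commits to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else b"genesis"
    payload = prev_hash + content
    ledger.append({
        "content": content,
        "author": author_key.public_key(),   # published, so anyone can verify
        "signature": author_key.sign(payload),
        "hash": hashlib.sha256(payload).digest(),
    })

# A campaign posts (say) the hash of an official video; later, anyone
# can confirm the entry was signed by the campaign's known key.
campaign_key = Ed25519PrivateKey.generate()
append_post(campaign_key, b"sha256-of-official-campaign-video")

# Replaying the chain: verify() raises if any entry was forged
# or if history was rewritten after the fact.
prev_hash = b"genesis"
for entry in ledger:
    entry["author"].verify(entry["signature"], prev_hash + entry["content"])
    prev_hash = entry["hash"]
print("Ledger verified: every entry was signed by its claimed author.")
```

The point isn't this exact scheme; it's that a public, signed, tamper-evident record makes "did this person really post this?" a checkable question instead of a trust exercise.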
So really, the only option we're left with at the moment is something we all should have learned in high school. Find and read the primary sources that are being referenced. Look at competing viewpoints. Consider possible bias in the videos you watch and articles you read. Don't take quotes from anyone, even the President of the United States, for granted. Dig into the facts. See if you can find evidence supporting the "truth" you're being told.
If we're going to make it out the other side of all of this intact, it's going to take a lot of work from everyone. People can't just watch the news and assume they're being told absolute truths anymore; they'll have to look into the evidence themselves. Folks like me who are building these technologies need to do a better job at solving the UI/UX problems we're faced with so everyone can feel comfortable using decentralized applications. And honestly, the news organizations need to do a better job at serving you the facts. We, as citizens, deserve better than what they've been providing.
But, as always, do your own research. And stay safe out there.