
Deepfakes: How One Reporter Fared Trying to Outthink Misinformation

I’m sitting in my office in Rock Hill, talking on Zoom with John Sohrawardi in Rochester, N.Y. Sohrawardi, a research student at the Rochester Institute of Technology, is running me through a game – it turns out I’m the first reporter to play it – to test how well I would vet something that, if true, would be a bombshell.

This is research for the DeFake project – a joint RIT/University of South Carolina project to develop an advanced, for-journalists-only deepfake detection software program.

While a prototype version of DeFake should be available to select newsrooms to help keep an eye on this election season, the full program probably won’t be ready until this time next year. To get it fully ready to help vet potentially altered or staged videos in real time, the researchers first need to learn how reporters and editors think their way through scenarios that can charitably be described as “Oh, man!”

Sohrawardi, one of the key engineers of the DeFake software, looks every inch the gamer on this Zoom chat, right down to the headset with the mic propped just outside the plosive range of his mouth and nose. He hits me with the first of three situations to learn what I would do.

“You're presented with a scenario,” says the young man who designed this game to be like Dungeons & Dragons, where each action taken yields consequences. “Dylan, a team member from a news gathering team, points out this tweet from Elon Musk and suggests that it could be an interesting story to go on.”

The scenario: I’m a big-deal editor at a large news agency, and a trusted colleague has just shown me a tweet saying the Tesla CEO is stepping down immediately, to be replaced by another man.

What do I do with this information?

Before I answer, allow me to explain why I’m doing the absolutely unthinkable and putting myself in the center of a story. I don’t like doing it, I assure you, and I certainly don’t think you or anyone else should construe that I’m trying to position myself as a representative example of all journalists, or of any news agency (including my own).

Rather, I’m doing this to let you, valued reader and listener, into the process of how a reporter does his job.

At the risk of editorializing, journalism has a bad design flaw in that we who practice it don’t usually let the public we’re here to serve see what we do and how we think things through. That disconnect leads to mistrust because we inadvertently present ourselves as a faceless institution that, like government or police or science, asks you to just believe in us because we know what we’re doing for your own good.

But too often, that leaves the public to conjure its own ideas about what we’re really up to – and let’s just say, the ideas people come up with about us can be … creative.

I do hate to dampen anyone’s creative brainstorming, but the unsexy truth about journalism is that what we’re really up to is trying very hard to serve the public good and uphold the inviolably sacred public trust.

But when people only see results and not process, they can (and far too often do) conclude that we journalists are trying to invent and push our own narrative reality. To what end, I’ve never been entirely sure, I confess.

But please trust me when I say (and may my colleagues in the press forgive me for this), we’re not smart enough to pull that off. We’re also not that coordinated. It’s hard enough to get three people from the same newsroom to do a joint project on a single subject or theme. We definitely are not going to be able to engineer global hegemony in conjunction with other newsrooms.

So by making my own interactions with Sohrawardi’s game the focal point of this story, I’m at least attempting to give you the chance to see how a lot of reporters make decisions about what we will or won’t do with information we find.

So … what would I do about that tweet from Elon Musk?

“Well, I think the first thing I would do is make sure that came from Elon Musk’s real account,” I tell Sohrawardi. “That definitely needs to be checked out before sharing or addressing it in some way.”

But what’s this? It actually did come from Elon Musk’s verified Twitter account?

Sohrawardi gives me a set of options like, do I try to contact Musk or Tesla? Do I check the Twitter account of Tesla President Jerome Guillen, the man Musk said would be taking over the whole company? Do I search Twitter for context?

I opt to call Elon Musk, but I don’t have his number. I try Tesla’s front office, but can’t reach the information person.

I know a guy who works there (in this game, not in real life, where I don’t know anyone who’s even test-driven a Tesla), so I call him. I’m told he doesn’t know about stuff like C-suite moves, but Guillen would be a good leader for the company if this is all true.

These outcomes, by the way, are the consequences I mentioned earlier – every action leads to something else we need to snake our way through. And while I’m looking at this steadily eroding set of options, three blue boxes hover at the bottom of the screen: Publish story as fake; Publish story as real; Do not publish anything.

We check Guillen’s Twitter account and see that there’s only one tweet ever sent from it – saying that he’ll be taking over at Tesla.

I decide not to run anything because there’s something fishy about this. I admit, it’s easier to be suspicious in a game like this, where I know I’m going to be thrown a steady diet of sliders, but even so, I don’t know enough yet, and the fact that Guillen has but a single tweet makes me wonder.

(I find out later that the Guillen account was indeed a phony, but let’s not get ahead of ourselves.)

So no. No story.

But there’s some kind of video attached to this tweet, and thanks to it, the announcement has already racked up 3,000 views in 15 minutes. People are seeing it and they’re reading that Musk is stepping down. It’s out there. And the longer I delay in addressing it, the more hits this thing gets and the more potential misinformation spreads.

I opt to say something on Twitter, but not write a full story yet; something saying this is an unverified story and we’re looking into it.

“It would be like, look, we haven’t verified this,” I say. “If you’re seeing reports about this, it might be ….”

I don’t finish the sentence, but I’m strongly implying the word “fake.”

I also tell Sohrawardi that I’d scout around on other news websites to see if anyone’s found anything out or is reporting on it.

“In this case,” he says, “you are the large organization that people are going to look out for.”

I’m chewing my lip as I reiterate that I’d still not have a fully written story yet. But I also reiterate that I’d say something on Twitter about it being unverified, because a lot of people are already passing this around.

“So do not publish anything, I'm guessing,” Sohrawardi says.

“Yeah, I … certainly wouldn't publish the story that he was stepping down,” is my answer.

“Retweets keep piling up,” is the consequence. But by now, “Elon Musk released a tweet that says that account was hacked; Tesla stocks fell 5 percent.”

Given the company is worth billions, 5 percent is quite a chunk of change to lose.

But do I believe the new tweet? From the same account? The original tweet announcing Musk’s departure is gone.

I still haven’t found a human being at the company who can assure me of anything. And yet, the company is facing some real-world fallout on what is now being attacked as a rumor.

The actual words coming out of my mouth as I work my way through this are a lot of “like, you know, like” stuff. But without a person, a real, verifiable source with a name, I’m still not ready to publish anything other than a tweet that says, to the effect, “Chill, we’re looking into this.”

I shake my head. “You did not make this easy, did you?”

“I’m trying to push you,” Sohrawardi replies.

“You’re doing it.”

I tell him that if I can truly verify that Musk is the one who pulled the original tweet and said he was hacked, I would run a story about that.

“OK, so you’ll publish an article about this?”

I relent and say yes.

“His account was hacked,” Sohrawardi says. Which means I guessed right.

But I don’t feel great about a good guess. Yes, all signs point to a hacked account, but I still haven’t talked to Elon Musk, or anyone at Tesla in person.

I feel like I need a shower. I try to pacify myself by reminding myself that this is just a game, and one designed to always have a wrong outcome somehow.

I also try to console myself with the fact that in the two scenarios that followed the one above – one involving a potentially fake video of Dr. Anthony Fauci talking about COVID and one with a potentially fake video from Democratic presidential candidate Joe Biden’s former campaign manager – I did better. I used the DeFake tool, which told me the videos were probably phonies, but more importantly, I trusted my gut that the circumstances surrounding the videos just seemed off.

There was still a lot of fallout in those subsequent scenarios. Thousands of people still shared what turned out to be misinformation. There were still no situations where, once misinformation was out there, it stayed contained and manageable. That’s not how information works in an age when everyone has the floor at the same time and not everyone bites their lip over what to do with a potentially phony news story.

Although I’m the first actual reporter Sohrawardi has run through this game, he has spoken to several reporters, including some high-profile national beat reporters, who gave him insight on those consequences. So yes, the deck was stacked, but I left our time together having given Sohrawardi something in return – he might start adjusting how he delivers these game questions so that the journalists he tests in the months ahead will not be so quick to think they’re seeing red flags.

But already, Sohrawardi has found that journalists he’s spoken with at every level approach stories the way I did – they trust their guts. They overwhelmingly want to talk to real people before they make any decisions. But they do want good tools that help them make those decisions.

And ultimately, that’s what DeFake is trying to be – a tool that supplements the journalist’s gut feeling and complements old-fashioned legwork, but doesn’t replace them. Because even an AI-driven program that analyzes videos for the teeny-tiniest of clues that they might have been doctored shouldn’t be left to make decisions about what to do with that information. Or disinformation.

What I left the game with was a mix of gratitude that someone is looking this deeply into things and dread that I could be not just fooled by a convincing bogus video, but pushed to act because of the consequences of my inaction.

That’s something a clever troll could learn to exploit. And it’s part of the arms race against misinformation that the RIT/USC team is fighting.

This story is part of a series on a USC/Rochester Institute of Technology project to develop AI-driven deepfake detection software for journalists.

Click here for the story on the DeFake project.

Click here to learn how journalists of tomorrow think about deepfakes and public trust.

Scott Morgan is the Upstate Multimedia Reporter for South Carolina Public Radio. Follow Scott on Twitter, @ByScottMorgan, where he will never say he’s stepping down from the helm of Tesla Motors.

