Amid debates about how artificial intelligence will affect jobs, the economy, politics and our shared reality, one thing is clear: AI-generated content is here.
Chances are you've already encountered content created by generative AI software, which can produce realistic-seeming text, images, audio and video.
So what do you need to know about sorting fact from AI fiction? And how can we think about using AI responsibly?
Thanks to image generators like OpenAI's DALL-E 2, Midjourney and Stable Diffusion, AI-generated images are more realistic and more available than ever. And technology to create videos out of whole cloth is rapidly improving, too.
The current wave of fake images isn't perfect, however, especially when it comes to depicting people. Generators can struggle with creating realistic hands, teeth and accessories like glasses and jewelry. If an image includes multiple people, there may be even more irregularities.
Take the synthetic image of the Pope wearing a stylish puffy coat that recently went viral. If you look closer, his fingers don't seem to actually be grasping the coffee cup he appears to be holding. The rim of his eyeglasses is distorted.
Another set of viral fake photos purportedly showed former President Donald Trump getting arrested. In some images, hands were bizarre and faces in the background were strangely blurred.
Synthetic videos have their own oddities, like slight mismatches between sound and motion and distorted mouths. They often lack facial expressions or subtle body movements that real people make.
Some tools try to detect AI-generated content, but they are not always reliable.
Experts caution against relying too heavily on these kinds of tells. The newest version of Midjourney, for example, is much better at rendering hands. The absence of blinking used to be a signal a video might be computer-generated, but that is no longer the case.
"The problem is we've started to cultivate an idea that you can spot these AI-generated images by these little clues. And the clues don't last," says Sam Gregory of the nonprofit Witness, which helps people use video and technology to protect human rights.
Gregory says it can be counterproductive to spend too long trying to analyze an image unless you're trained in digital forensics. And too much skepticism can backfire — giving bad actors the opportunity to discredit real images and video as fake.
Instead of going down a rabbit hole of trying to examine images pixel-by-pixel, experts recommend zooming out, using tried-and-true techniques of media literacy.
One model, created by research scientist Mike Caulfield, is called SIFT. That stands for four steps: Stop. Investigate the source. Find better coverage. Trace the original context.
The overall idea is to slow down and consider what you're looking at — especially pictures, posts, or claims that trigger your emotions.
"Something seems too good to be true or too funny to believe or too confirming of your existing biases," says Gregory. "People want to lean into their belief that something is real, that their belief is confirmed about a particular piece of media."
A good first step is to look for other coverage of the same topic. If it's an image or video of an event — say a politician speaking — are there other photos from the same event?
Does the location look accurate? Fake photos of a non-existent explosion at the Pentagon went viral and sparked a brief dip in the stock market. But the building depicted didn't actually resemble the Pentagon.
Google recently announced it's making it easier to see when a photo first appeared online, which could help identify AI-generated pictures as well as photos that are shared with misleading or false context — like that viral image of a shark swimming on a flooded highway that often appears after hurricanes.
Pause and think in other situations, too. Scammers have begun using spoofed audio to scam people by impersonating family members in distress. The Federal Trade Commission has issued a consumer alert and urged vigilance. It suggests if you get a call from a friend or relative asking for money, call the person back at a known number to verify it's really them.
AI images aren't the only way you might be fooled by a computer. Chatbots like OpenAI's ChatGPT, Microsoft's Bing and Google's Bard are really good at producing text that sounds highly plausible. But that doesn't mean what they tell you is true or accurate.
That's because they're trained on massive amounts of text to find statistical relationships between words. They use that information to create everything from recipes to political speeches to computer code.
While the text chatbots spit out may sound convincingly human, they do not learn, think, or create in the ways we do, says Gary Marcus, a cognitive scientist and professor emeritus at New York University.
"They don't have models of the world. They don't reason. They don't know what facts are. They're not built for that," he says. "They're basically autocomplete on steroids. They predict what words would be plausible in some context, and plausible is not the same as true."
ChatGPT fabricated a damaging allegation of sexual harassment against a law professor. It's made up a story my colleague Geoff Brumfiel, an editor and correspondent on NPR's science desk, never wrote. Bing invented quotes from a Pentagon spokesman. Bard made a factual error during its high-profile launch that sent Google's parent company's shares plummeting.
That means you should double-check anything a chatbot tells you — even if it comes footnoted with sources, as Google's Bard and Microsoft's Bing do. Make sure the links they cite are real and actually support the information the chatbot provides.
In its early phase, AI can be unreliable and even risky. But it's also fun and interesting to experiment with. And like it or not, generative AI tools are being integrated into all kinds of software, from email and search to Google Docs, Microsoft Office, Zoom, Expedia, and Snapchat.
Playing around with chatbots and image generators is a good way to learn more about how the technology works and what it can and can't do.
"My main piece of advice to everybody is, do use this stuff," says Ethan Mollick, a professor at the University of Pennsylvania's Wharton School. "You absolutely should be making things. You should absolutely spend an hour on ChatGPT...You should try and automate your job."
Mollick requires his students to use AI. And while he's an enthusiastic user of chatbots and other forms of AI, he's also wary of the ways they can be misused.
"You've got to figure this thing out because we're in a world where there's nobody with great advice right now. There isn't like a manual out there that you can read," Mollick says.
If you are going to experiment with generative AI, here are a few things to keep in mind.
"You can think of it as like an infinitely helpful intern with access to all of human knowledge who makes stuff up every once in a while," Mollick says.
The audio portion of this episode was produced by Thomas Lu and edited by Brett Neely and Meghan Keane.
We'd love to hear from you. Email us at [email protected]. Listen to Life Kit on Apple Podcasts and Spotify, or sign up for our newsletter.