What Are Deepfakes?
The term deepfake refers to media content (e.g., images, videos, and audio) generated using artificial intelligence. Creators use deep neural networks to analyze existing data and synthesize convincing replicas of individuals, events, or situations. Two common architectures are generative adversarial networks (GANs), which produce realistic data by pitting two networks against each other, and variational autoencoders (VAEs), which generate new data by learning the patterns in existing data. By leveraging GANs and VAEs, deepfakes blur the line between reality and fabrication, presenting both creative opportunities and ethical dilemmas.
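The "two networks compete" idea behind a GAN can be illustrated with a toy sketch. The example below is a hypothetical one-dimensional GAN, not a real deepfake model: a generator learns to produce numbers that look like samples from the "real" data (a bell curve centered at 4.0), while a discriminator tries to tell real samples from fakes. Real deepfake systems use deep convolutional networks, but the adversarial training loop has the same shape.

```python
# Toy 1-D GAN sketch (illustrative only; numbers and learning rate are
# arbitrary choices, not from any real deepfake system).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: turns random noise z into a fake sample, g(z) = w_g*z + b_g
w_g, b_g = 1.0, 0.0
# Discriminator: scores a sample, d(x) = sigmoid(w_d*x + b_d)
w_d, b_d = 0.1, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal()            # noise input to the generator
    real = rng.normal(loc=4.0)  # sample from the "real" data
    fake = w_g * z + b_g        # the generator's forgery

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w_d * x + b_d)
        grad = p - label        # cross-entropy gradient w.r.t. the logit
        w_d -= lr * grad * x
        b_d -= lr * grad

    # Generator update: push d(fake) toward 1, i.e. fool the critic.
    p = sigmoid(w_d * fake + b_d)
    grad = (p - 1.0) * w_d      # chain rule through the discriminator
    w_g -= lr * grad * z
    b_g -= lr * grad

# After training, the generator's typical output has drifted toward the
# real data's center (4.0), because that is what fools the discriminator.
print(round(b_g, 2))
```

The generator never sees the real data directly; it only sees the discriminator's verdicts, and improving against an ever-improving critic is what drives the forgeries toward realism.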
How Are Deepfakes Generated?
- AI is trained on a large collection of photos, videos, or voice clips to learn how someone looks, sounds, and behaves.
- The model learns patterns that let it replicate facial expressions, vocal tone, and body movements.
- The more data the AI receives, the more convincing and lifelike the deepfake becomes, making it harder to detect.
How Do We Detect Deepfakes?
Keep an eye out for these signs that content may have been manipulated:
- Unnatural facial expressions or blinking – Movements may seem robotic, stiff, or out of sync with normal behavior.
- Strange lighting or shadows – Inconsistent shadows or highlights that don’t match the surroundings can be a red flag.
- Voice and lip movement don’t match – The audio may sound real, but the lips don’t move in sync with the words.
- Glitches or visual flickers – Sudden blur effects, warped frames, or audio hiccups can reveal AI manipulation.
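The "glitches or visual flickers" cue above can also be checked automatically. The sketch below is a toy illustration, not a production deepfake detector: it measures how much each video frame changes from the previous one and flags statistical outliers, which is one simple way a warped or spliced frame can stand out.

```python
# Toy flicker check (illustrative sketch, not a real detection tool):
# flag frames whose change from the previous frame is an outlier.
import numpy as np

def flag_flicker_frames(frames, z_thresh=3.0):
    """Return indices of frames whose mean pixel change from the
    previous frame is more than z_thresh standard deviations above
    the average change across the clip."""
    diffs = np.array([
        np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        for i in range(1, len(frames))
    ])
    mu, sigma = diffs.mean(), diffs.std()
    if sigma == 0:
        return []
    # diffs[i-1] compares frame i to frame i-1, so flagged index is i.
    return [i + 1 for i, d in enumerate(diffs) if (d - mu) / sigma > z_thresh]

# Synthetic demo: 50 nearly identical 8x8 "frames" with one glitch.
rng = np.random.default_rng(1)
frames = [rng.integers(100, 105, size=(8, 8)) for _ in range(50)]
frames[20] = rng.integers(0, 255, size=(8, 8))  # simulated corrupted frame

# Flags the glitch frame and the jump back to normal right after it.
print(flag_flicker_frames(frames))
```

Real detection tools combine many such signals (and learned models) rather than a single pixel-difference statistic, but the principle is the same: manipulation tends to leave measurable inconsistencies.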
What Can We Do To Protect Ourselves?
- Media Literacy Education: Educate yourself and others about the existence and implications of deepfake technology. Develop critical thinking skills to discern between authentic and manipulated media content.
- Verify Sources: Verify the authenticity of media sources before sharing or relying on information. Cross-reference news and media content from multiple credible sources to ensure accuracy.
- Use Trusted Platforms: Utilize trusted and secure platforms for sharing and consuming media content. Be cautious when accessing unfamiliar websites or downloading content from unverified sources.
- Enable Two-Factor Authentication (2FA): Enable two-factor authentication on your online accounts to add an extra layer of security. This helps prevent unauthorized access to your accounts, reducing the risk of data breaches.
- Regularly Update Software: Keep your operating systems, applications, and security software up to date with the latest patches and updates. This helps address known vulnerabilities and strengthens your defenses against cyber threats.
- Verify Identity: Verify the identity of individuals or organizations before sharing sensitive information or engaging in transactions online. Be wary of requests for personal or financial information from unfamiliar sources.
- Report Suspected Deepfakes: If you encounter suspected deepfake content, report it to ITS for investigation via email at ask@slu.edu or enter a ticket at ask.slu.edu. Prompt reporting can help prevent the spread of misinformation and protect others from potential harm.
- Be Cautious on Social Media: Limit how much personal video and voice content you post publicly, since that material can be used as training data for deepfakes.
- Don't Use Untrusted AI Apps: Steer clear of unknown deepfake or voice-cloning apps that ask for your photos or voice recordings.
Example 1 - Hong Kong Deepfake Scam:
A deepfake scam in Hong Kong used fake video-call avatars of company executives to trick an employee into transferring $25 million. The case highlights the need for better awareness, security measures, and collaboration to prevent such fraud. Organizations should educate staff, require multi-factor authentication, deploy detection technology, and strengthen verification policies to prevent financial losses from deepfake scams.
Example 2 - Voice Cloning Kidnapping Scam:
In 2023, Jennifer DeStefano, a mother in Arizona, received a distressing call from what sounded like her daughter, claiming she had been kidnapped. The caller demanded ransom money, making the situation feel alarmingly real. In reality, scammers had used AI to clone her daughter's voice from a short online video. This incident highlights how easily accessible AI technology can be exploited to create convincing scams, emphasizing the need for vigilance when receiving unexpected calls (Dwoskin, 2023).