
The Danger of Deepfakes

Karl Aguilar



Forgeries and fakes have been a constant problem throughout history. It does not help that the same technological advances that make fakes easier to detect have also paved the way for more sophisticated forgeries, ones that challenge even the most powerful fake-detection tools.


One deception-building technology in particular has been used to great effect and has managed to fool many people: the deepfake.


Defining deepfake


As the name implies, deepfake technology involves more than simple mimicry of the person being impersonated. It relies on artificial intelligence trained on large amounts of data about that person, from physical appearance to mannerisms to speech patterns, to create a likeness that looks and sounds almost indistinguishable from the real thing.
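
For the technically curious, the classic face-swap approach pairs one shared encoder with a separate decoder per identity. The sketch below, in PyTorch, is a minimal illustration of that architecture; the layer sizes, the 64x64 input, and all names are assumptions chosen for clarity, not the code of any actual deepfake tool.

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder behind
# classic face-swap deepfakes. All layer sizes and names are illustrative
# assumptions, not any specific tool's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent expression/pose code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the shared latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder, one decoder per person. Both decoders learn to rebuild
# faces from the same latent space during training.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))   # the "swap": B's decoder on A's code
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

Because both decoders learn to reconstruct faces from the same latent space, feeding person A's face through person B's decoder renders B's likeness with A's expression and pose, which is the core of the swap.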


Deepfakes are not inherently bad. They are used legitimately, especially in entertainment, where they can bring a deceased actor back to the screen for a particular role. However, the legal and moral lines blur when deepfakes are used for unsavory activities.


At the moment, most deepfakes still aren't very good: gestures fall out of sync, or the person's speech sounds subtly unnatural. But it may not be long before deepfakes become far more convincing, and possibly a far greater threat from a cybersecurity perspective.


Two worrying trends


Compounding the deepfake threat are two developments that could make it more dangerous than ever. The first is ongoing work on AI capable of expressing emotion. Among other concerns, the prospect of deepfakes conveying human-like emotion to aid deception has been a major sticking point. While the likes of Google and Microsoft acknowledge the threats posed by emotion AI, they remain keen on using it for “research” purposes, a decision many disapprove of.


The second, and perhaps most sinister, development is malware that would let cybercriminals automatically add realistic, malignant-looking growths to CT or MRI scans before radiologists and doctors examine them, or remove real cancerous nodules and lesions without detection. This makes misdiagnosis, and the failure to treat patients who need critical and timely care, a more worrying possibility than ever.


"Real-time deepfakes are the biggest threat on the horizon," commented Yisroel Mirsky, head of Ben Gurion University's Offensive AI Research Lab.


Fighting back against deepfakes


To combat the evolving deepfake threat, some organizations have turned to companies that create “ethical” deepfake content to help them detect possible fakes. Some of these companies also plan to release web-based detection tools that let individual consumers or enterprises upload content and receive a report stating whether the content is falsified, which algorithm was likely used to create the deepfake, and how that conclusion was reached.
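
As a rough illustration of what such a service might do under the hood, here is a hedged sketch of a frame-level detector in PyTorch. The tiny classifier, the 0.5 threshold, and the report fields are all illustrative assumptions; real products rely on much stronger models and forensic signals, and would also identify the suspected generation algorithm, which this toy omits.

```python
# Hedged sketch of how a detection service could score an upload frame by
# frame and aggregate the scores into a simple report. Everything here is
# an illustrative assumption, not any vendor's actual pipeline.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN mapping a 64x64 face crop to a fake-probability logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 1),
        )

    def forward(self, x):
        return self.net(x)

def score_video(frames: torch.Tensor, model: nn.Module,
                threshold: float = 0.5) -> dict:
    """Score each frame, then aggregate into a simple verdict report."""
    with torch.no_grad():
        probs = torch.sigmoid(model(frames)).squeeze(1)  # per-frame fake prob.
    mean_prob = probs.mean().item()
    return {
        "frames_scored": len(probs),
        "mean_fake_probability": round(mean_prob, 3),
        "verdict": "likely falsified" if mean_prob > threshold
                   else "likely authentic",
    }

# Usage with random stand-in frames; a real pipeline would decode the
# uploaded video and crop detected faces first.
model = FrameClassifier()
frames = torch.rand(8, 3, 64, 64)
print(score_video(frames, model))
```

An untrained toy model like this scores near chance; the point is the shape of the pipeline, in which per-frame evidence is pooled into a single human-readable verdict.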


Ultimately, the public has the final say on whether to reject deepfake content or accept it as gospel truth despite overwhelming evidence to the contrary, and that choice is beyond the control of any deepfake detection technology.


Given these challenges, education and vigilance remain the key defenses against deepfake deception: people who are aware of the tactics deepfakes enable are less likely to be led into false beliefs that can harm them.


