Can we trust what we see online? That has become a basic question users need to ask as deepfake technology becomes more widespread.
Deepfakes are defined as “digital forgeries” that can convincingly mimic a person’s voice or appearance. As the tools become more accessible and easier to use, they are increasingly being used to spread false information at scale.
Global technology company iProov, which specializes in biometric identity verification, recently conducted a consumer study on what it describes as a “trust recession.”
The findings show that nearly half of consumers now question the authenticity of almost everything they see online, largely due to increasingly realistic deepfakes.
From misinformation to corporate threats
Beyond online misinformation, attacks are also targeting organizations directly. These include attempts to infiltrate hiring processes or impersonate employees to gain internal access.
Once inside, attackers can operate with the same permissions as legitimate staff, allowing access to systems, data and infrastructure—making this a high-impact insider threat.
Dominic Forrest, chief technology officer of iProov, said in an interview with Insider PH that deepfake technology has become widely accessible, often through free or low-cost tools, and is now simple enough for almost anyone to use.
These tools, he added, allow fraudsters to impersonate others in real time—such as during video calls—by swapping faces while maintaining natural movements and behavior.
Expanding scope of attacks
“The rate of deepfake attacks is increasing very rapidly worldwide, and the number of different attack methods available to criminals is also growing quickly. Traditionally, deepfakes were mainly used against national identity schemes and banks. Now, these attacks are being aimed at companies across all industries, not just financial institutions and governments,” he explained.
Forrest also noted that fraud has expanded beyond basic financial scams to include impersonation of internal employees to gain organizational access. This underscores the need for continuous identity verification, not only during hiring but throughout employment.
AI in hiring and workforce infiltration
In some cases, he said, North Korean actors targeted companies in the United States by using deepfakes and AI tools during job interviews, combining face-swapping with AI-generated responses. This allowed individuals with limited technical expertise to pass as qualified candidates.
As a result, attackers were able to move freely within the organization and gain access to critical systems and confidential data.
PH as a high-risk target
At the national level, he said the Philippines is a high-risk target due to its large business process outsourcing and shared services sector, where a single breach can expose multiple global clients.
Government agencies handling public funds are also prime targets, while attackers increasingly see value across a wide range of industries.
“Tech companies are targets because criminals may want to steal products, source code, or intellectual property,” Forrest added.
To strengthen defenses, companies need to move beyond traditional IT security measures that only verify login credentials.
AI vs AI in cybersecurity
Forrest said AI is now being used on both sides: by fraudsters to carry out attacks, and by companies to detect deepfakes and verify real human presence.
The iProov workforce solution suite is designed for identity verification in areas such as onboarding and high-risk transactions, ensuring a real person is present during critical actions. It uses multiple layers of AI to detect threats and adapt to new attack methods.
“We can ensure that the person at the far end of a video call is a real human being, not a deepfake. We sit on the call for an interview, and we can tell if a person at the far end is a human being or if they’ve been deepfaked in any shape or form,” he explained.
Addressing identity-based risks
The system is intended to address risks such as AI-enabled impersonation, social engineering, insider threats and third-party access, while reducing identity-based attacks and improving audit accountability.
“The core problem is knowing exactly who is in your workforce from the moment you recruit them and throughout their employment. It’s not only about recruitment, but also about verifying identity after they become employees,” he added.
Local adoption in the Philippines
In the Philippines, UnionDigital Bank (UDB) uses iProov to help prevent account takeovers, particularly during credential resets and new device onboarding.
Isiah Sison, customer identity and security innovation head of UDB, said in a statement that secure and reliable identity verification is central to delivering trusted digital banking, with continued investment in technology to protect customer interactions.
Forrest added that similar approaches can be applied beyond banking to employee access.
“Beyond banking, iProov ensures that the right employees are accessing systems, which is especially important in remote or outsourced setups where one compromised identity could affect multiple organizations,” he said.
‘We can’t trust what we see’
When asked about future threats as deepfakes continue to advance, particularly those that could impact the human workforce, Forrest said: “We have all got to understand we're in a world now where we can't trust what we see.”
He also emphasized the importance of relying on credible sources to verify information and identities, especially in an era marked by deepfakes and rapidly advancing artificial intelligence.
“Deepfakes are reaching a stage where you cannot distinguish them from real content by eye in many normal scenarios. In the future, even long, complex videos such as war footage, naval scenes and complex environments will be indistinguishable to the human eye. We can sometimes still see small clues in complex scenes that reveal fakery, but those clues will disappear as the technology improves,” he warned.