Salceda said the proposal aims to ensure audiences do not mistake AI-generated material for authentic, human-made content.
“The ability of AI tools to produce highly realistic images, videos, and voice recordings presents both opportunities and risks,” Salceda said. “Without clear disclosure, the public can be misled, whether intentionally or unintentionally, into believing that synthetic content is real.”
What’s in the proposal
Under Salceda’s proposal, all institutions, platforms and production houses using generative AI to produce multimedia for public consumption would be required to include a visible and legible declaration — such as “This content contains AI-generated elements” — at the start of, or alongside, the content.
The requirement would apply to newsrooms, educational institutions, advertising agencies, government offices and other organizations producing public-facing materials.
Salceda cited the University of the Philippines Los Baños as an example, noting that some courses permit AI tools for schoolwork but require disclosure of their use.
“This is a model for responsible adoption that balances innovation with transparency,” he said.
It’s about transparency
Salceda also said online platforms should be required to flag when content is AI-generated or contains AI elements.
“Some platforms already attempt to do this, but it is neither comprehensive nor consistently effective,” he said. “The public needs a reliable, standardized system for such labeling.”
The lawmaker stressed the measure is not intended to ban AI.
“This is about transparency, not about banning AI,” Salceda said. “We want to promote responsible use. AI is a powerful tool for creativity and productivity, but when it comes to content that shapes public perception, especially political, historical and news-related materials, the public has the right to know when what they are seeing or hearing was created by a machine.”
Policy paper to be filed in Congress
Salceda warned that without regulation, AI-generated deepfakes, voice clones and photorealistic forgeries could be used for disinformation, reputational harm or market manipulation.
He noted that similar disclosure frameworks are being adopted in the European Union and parts of the United States.
“This is not censorship. It is the equivalent of a food label,” he added. “People can still consume the content, but they deserve to know what it is made of.”
The Institute for Risk and Strategic Studies will submit a policy paper this month to Congress and relevant regulatory agencies outlining the proposed legal and technical framework for an AI content declaration requirement, including recommended penalties for noncompliance.