New Delhi: American tech giant Google said it is committed to combating the threat of malicious AI and deepfake material, and will work with the government on the matter.
“We appreciate the opportunity to meet with the government, jointly with industry stakeholders, to discuss the need to develop safeguards around synthetic material, and share how we are building tools and guardrails to address related risks. We are committed to continuing this collaboration and sharing our expertise and knowledge towards our collective goal of responsible AI development,” Google said in a statement after a meeting between industry representatives and the government.
Google is investing in tools such as watermarking and synthetic audio detection to help people identify AI-generated content online.
“At Google, we are developing AI in a way that maximizes positive benefits to society while solving challenges, guided by our AI principles. We’re also creating tools and guardrails to help prevent misuse of technology, as well as enable people to better evaluate information online. We will continue to apply high standards for information quality and security, based on our deep experience and in collaboration with experts around the world. We have long-standing, robust policies, technologies and systems in place to identify and remove harmful content on our products and platforms. We are applying this same ethos and approach when launching new products powered by generative AI technology,” the company further said.
It is understood that in the coming months, YouTube will introduce updates that will inform viewers when the content they are watching is synthetic.
The platform will require creators to disclose when they have produced realistically altered or synthetic content, including with AI tools. Creators who consistently choose not to disclose this information may face content removal, suspension from the YouTube Partner Program, or other penalties.