A House of Commons committee has amended a draft bill to criminalise the non‑consensual sharing of AI‑generated sexual images, extending protection to pictures where the subject is nude or "nearly nude." The change follows concerns that the original wording would miss many deepfakes produced by tools such as Elon Musk's Grok chatbot and proliferated on the X platform.
Why this matters
According to the committee's report, the amendment plugs a loophole that could have left victims of AI‑driven sexual abuse without legal recourse. By broadening the definition beyond full nudity or explicit sexual acts, the legislation aligns with a growing global push to regulate synthetic media. Countries such as the United States and Australia are already debating similar measures, reflecting a broader trend of governments reacting to the rapid diffusion of generative AI tools.
The move also signals a shift in how lawmakers view digital consent. As experts warned, platforms like X host a flood of AI‑generated content that skirts existing obscenity standards, making it harder to prosecute offenders under traditional pornography laws. By explicitly naming "nearly nude" images, the UK aims to set a precedent that could influence future policy in the European Union and beyond, where regulators are wrestling with the balance between innovation and personal safety.
What we still don't know
While the committee's amendment broadens the bill's scope, it remains unclear how enforcement will be handled, what penalties are proposed, and whether the definition of "nearly nude" will be further clarified to avoid over‑reach. Additionally, the report does not detail how the law will address deepfakes generated by open‑source models outside major platforms.