
California Lawmakers Ponder Stricter Measures Against Deepfakes

California legislators are considering new legislation to strengthen the state's efforts to combat deepfake technology, explicitly targeting its misuse in political campaigns and explicit content. The move builds on California's pioneering anti-deepfake legislation of 2019, which placed the state at the forefront of this issue. Fewer than a dozen states have implemented measures to regulate deepfake technology, with Michigan being the latest to do so.

Existing Legislation

The existing California law, primarily championed by Assemblymember Marc Berman, addresses deepfakes in two distinct realms: pornography (A.B. 602) and political elections (A.B. 730). Both measures grant victims of deepfake manipulation the right to pursue legal action against those responsible for disseminating such deceptive content. Lawmakers acknowledge the need for an expanded scope, considering the rapid advancements in AI technology.

Advocacy and Criticism

While some advocates applaud the enacted measures as crucial safeguards, critics argue that existing laws have evident loopholes. Brandie Nonnecke, an associate professor of tech policy research at the University of California, Berkeley, contends that allowing victims to sue after the fact does not proactively mitigate the potential harm caused by deepfakes. Nonnecke suggests a shift in responsibility from users to social media platforms for flagging deceptive content.


Potential Barriers and Conflicts

The compatibility of state actions with federal law, particularly Section 230 of the Communications Decency Act, remains uncertain. This section shields social media platforms from liability for user-generated content. The current California deepfake law exempts platforms from monitoring deepfakes, raising questions about potential obligations on major tech entities.

Proposed Solutions and Legislative Actions

To prevent harm from deepfakes, Assemblymember Gail Pellerin intends to introduce legislation banning deceptive AI-generated content in political communications across various media, including social media, mailers, and robocalls. Drew Liebert, director of the California Institute for Technology and Democracy, emphasizes the need for a comprehensive ban on deepfakes, urging a reevaluation of the First Amendment in light of AI’s impact on democracy.

Brandie Nonnecke proposes that social media platforms leverage their technical capabilities to identify synthetically generated content before it is posted, potentially incorporating automatic disclosures for altered content.


Strengthening Existing Laws

Assemblymember Marc Berman acknowledges the potential weaknesses in the current legislation and is exploring additional measures to bolster the 2019 deepfake laws. One concern is whether the deepfake election law applies to wholly AI-generated content, rather than only to modifications of existing videos and images.


Maria Lenin Laus