Artificial Intelligence-Generated Content in Political Ads Raises New Concerns for Broadcasters
With the Iowa Republican Caucus happening in mid-January and dozens of additional primaries and caucuses to follow before the 2024 general election, broadcasters need to be aware of the use of artificial intelligence (AI), deepfakes and synthetic media in political advertising and the various laws at play when such content is used. These laws seek to ensure that viewers and listeners are made aware that the person they are seeing or the voice they are hearing in political advertising may not be who it looks like or sounds like. Campaigns, political committees, super PACs, special interest groups and other political advertisers are using AI, deepfakes and synthetic media in advertisements, making it easier to mislead and misinform viewers and listeners.
Broadcasters should be aware of the legal issues that can arise from advertisements that use AI or employ other methods intended to deceive. Under federal law, broadcast stations are not liable for the content of political ads that qualify as a “use” by a legally qualified candidate. That liability shield does not, however, extend to ads from third-party groups, PACs and other non-candidate entities. As noted below, certain states have already enacted laws governing the use of AI in political communications.
Federal Efforts
There is no federal law or regulation prohibiting the use of AI in political advertisements or requiring a disclaimer, even if the ad uses AI to mislead, deceive or spread misinformation. U.S. Representative Yvette Clarke (D-N.Y.) has introduced H.R. 3044, the REAL Political Advertisements Act, which, if enacted, would require political advertisements containing content generated in whole or in part using AI to include a clear and conspicuous disclaimer alerting the viewer or listener to that fact. The bill has not garnered any cosponsors or had a hearing. U.S. Senator Amy Klobuchar (D-Minn.) has introduced S. 2770, the Protect Elections from Deceptive AI Act, which would prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for federal office. A bipartisan group of five senators has cosponsored the bill, but a hearing on the bill has not occurred.
In response to a Petition for Rulemaking filed by nonprofit advocacy organization Public Citizen, the Federal Election Commission (FEC) accepted comments through mid-October 2023 on whether it should amend its regulation on “fraudulent misrepresentation” to clarify that “the restrictions and penalties of the law and the Code of Regulations are applicable” should “candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false [Artificial Intelligence]-generated content in campaign ads or other communications.” Thousands of individuals and entities submitted comments. The FEC has yet to signal whether it will move forward, despite calls from more than 50 members of Congress, and broad support from the public, for the FEC to adopt such rules.
State Efforts
Thirteen states have either passed laws governing the use of AI in political advertising or are exploring legislative action; a state-by-state review follows. Some states’ legislation or statutes protect broadcast stations from liability for their role in distributing violative content. In other cases, the bills and statutes as written are unclear on whether broadcast stations could be held liable for distributing content that does not comply with that state’s legislation. Broadcasters should consult communications counsel before airing any advertisement that contains AI-generated content, particularly where it might be considered deceptive.
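For stations tracking these requirements internally, the statutory disclaimer texts summarized below lend themselves to a simple lookup table. The following is an illustrative sketch only, not legal advice: the `DISCLAIMERS` mapping is a hypothetical structure assumed for the example, it covers only the measures discussed in this article, and several of the underlying bills are still pending and may change.

```python
# Illustrative sketch only: a hypothetical lookup of disclaimer text drawn
# from the statutes and bills summarized in this article. Several entries
# reflect pending bills that may change; always verify with counsel.
DISCLAIMERS = {
    "CA": "This [image/video/audio] has been manipulated.",            # enacted
    "FL": ("Created in whole or in part with the use of generative "
           "artificial intelligence (AI)."),                           # pending
    "MI": ("This message was generated in whole or substantially "
           "by artificial intelligence."),                             # enacted
    "NH": ("This [image/video/audio] has been manipulated or generated "
           "by artificial intelligence technology and depicts speech "
           "or conduct that did not occur."),                          # pending
    "NJ": "This advertisement contains manipulated images or sound.",  # pending
    "SC": ("This [image/video/audio] has been manipulated or generated "
           "by artificial intelligence."),                             # pending
    "WI": "Contains content generated by AI",                          # pending
}

def required_disclaimer(state: str) -> str | None:
    """Return the disclaimer text for a state, or None if this table has
    no entry. A missing entry does not mean no law applies."""
    return DISCLAIMERS.get(state.upper())

print(required_disclaimer("mi"))
```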
California (AB 730) (Enacted October 2019)
As of January 1, 2023, it is a violation of California law to, with actual malice, produce, distribute, publish or broadcast campaign material that contains (1) a picture or photograph of a person or persons into which the image of a candidate for public office is superimposed or (2) a picture or photograph of a candidate for public office into which the image of another person or persons is superimposed, unless the campaign material includes the statement: “This picture is not an accurate representation of fact.” Within 60 days of an election, it is also a violation of state law to distribute, with actual malice, materially deceptive manipulated audio or visual media of a candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating: “This [image/video/audio] has been manipulated.”
A candidate for elective office whose voice or likeness appears in materially deceptive audio or visual media may bring an action for general or special damages against the person, committee or other entity that distributed the materially deceptive media. These provisions do not apply to broadcast stations.
Florida (SB 850) (Filed December 2023)
This pending bill would require any political advertisement containing images, video, audio, text or other digital content created in whole or in part using generative artificial intelligence, where the generated content appears to depict a real person performing an action that did not actually occur, to include the prominent disclaimer: “Created in whole or in part with the use of generative artificial intelligence (AI).” Failing to include such a disclaimer could lead to civil penalties. As currently written, the bill contains no liability exemption for broadcast stations.
Illinois (SB 1742) (Introduced February 2023)
This pending bill provides that a person commits a Class A misdemeanor if the person, with intent to injure a candidate or influence the result of an election, creates a deepfake video and causes the deepfake video to be published or distributed within 30 days of an election. The bill appears to exempt the broadcaster from liability if it merely distributes the ad.
Kentucky (BR 26) (Bill Requested for 2024 Regular Session)
This pending bill would make it unlawful for any person to willfully and knowingly disseminate a deepfake of a depicted individual without the express, written consent of the depicted individual. Violators are personally liable for appropriate injunctive relief, actual damages, punitive damages, court costs and reasonable attorney’s fees. The bill as written does not address broadcaster liability.
Michigan (HB 5141) (Enacted November 2023)
Effective February 13, 2024, any qualified political advertisement created in whole or substantially with the use of artificial intelligence and published or distributed by a person, committee or other entity must clearly and conspicuously state: “This message was generated in whole or substantially by artificial intelligence.”
Violators are subject to civil fines of $250 for the first violation and $1,000 for each subsequent violation. Each advertisement is a separate violation. The penalties do not apply to broadcast stations that are paid to broadcast the political advertisement.
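Because each advertisement counts as a separate violation, exposure scales with the number of non-compliant ads. A minimal sketch of that arithmetic, assuming the $250 first-violation / $1,000 subsequent-violation schedule described above (illustrative only, not legal advice):

```python
def michigan_civil_fine(num_violations: int) -> int:
    """Total civil fine under the schedule described above: $250 for the
    first violation and $1,000 for each subsequent one, with each
    non-compliant advertisement treated as a separate violation."""
    if num_violations <= 0:
        return 0
    return 250 + 1000 * (num_violations - 1)

# e.g., three non-compliant ads: $250 + $1,000 + $1,000 = $2,250
assert michigan_civil_fine(3) == 2250
```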
Minnesota (HF 1370) (Enacted May 2023)
Effective August 1, 2023, it is a crime under Minnesota law to disseminate a video, photograph or sound recording that was altered through technological means and appears to authentically depict the speech or conduct of a person who did not engage in that speech or conduct if the dissemination occurs within 90 days of an election, takes place without the consent of the depicted person and is disseminated with the intent to influence an election. The statute does not include a broadcaster liability shield.
New Hampshire (HB 1596) (Introduced December 2023)
This pending bill requires the disclosure of deceptive synthetic media and deceptive and fraudulent deepfake material in political advertising. It provides that a person, corporation, committee or other entity shall not, within 90 days of an election at which a candidate for elective office will appear on the ballot, distribute a synthetic media message that the person, corporation, committee or other entity knows or should have known is a deceptive and fraudulent deepfake of a candidate or party on the state or local ballot, without a disclosure that states: “This [image/video/audio] has been manipulated or generated by artificial intelligence technology and depicts speech or conduct that did not occur.”
These provisions do not apply to broadcast stations that broadcast a deceptive and fraudulent deepfake as part of a (1) bona fide newscast, (2) news interview, (3) news documentary, or (4) on-the-spot coverage of bona fide news events. In such cases, the broadcast must clearly acknowledge, through conspicuous content or a disclosure, that there are questions about the authenticity of the materially deceptive content. These provisions also will not apply to a broadcast station that is paid to broadcast a deceptive and fraudulent deepfake if the station has made a good faith effort to establish that the depiction is not a deceptive and fraudulent deepfake.
New Jersey (A 5510) (Passed Assembly June 2023)
This pending bill prohibits a person from knowingly or recklessly distributing deceptive audio or visual media within 90 days of an election in which a candidate will appear on the ballot, with the intent to deceive a voter with false information about the candidate or the election. The prohibition does not apply if the advertisement states: “This advertisement contains manipulated images or sound.”
A registered voter may seek injunctive or other equitable relief prohibiting the distribution of a deepfake, and a candidate whose voice or likeness appears in a distributed deepfake may bring an action for general or special damages against the person that distributed the media.
These provisions do not apply to broadcast stations that routinely carry news and commentary of general interest and that broadcast or publish a deepfake audio or visual message for the purpose of disseminating newsworthy facts, so long as the broadcast or publication also clearly contains the required disclosure. The broadcaster exclusion does not appear to protect stations that broadcast political ads but do not routinely carry news and commentary of general interest.
New York
(A 7106A) (Introduced May 2023)
This pending bill requires any political communication that was produced by or includes any synthetic media to disclose the use of such synthetic media by stating: “This political communication was created with the assistance of artificial intelligence.”
(S 7592) (Introduced July 2023)
This pending bill requires any political communication that uses an image or video footage generated in whole or in part with artificial intelligence to include a disclosure stating: “This communication was generated using artificial intelligence.”
South Carolina (H 4660) (Introduced December 2023)
This pending bill provides that a person, corporation, committee or other entity shall not, within 90 days of an election at which a candidate for elective office will appear on the ballot, distribute a synthetic media message that the person, corporation, committee or other entity knows or should have known is a deceptive and fraudulent deepfake of that candidate, unless the message includes a disclosure stating: “This [image/video/audio] has been manipulated or generated by artificial intelligence.”
A candidate whose appearance, action or speech is depicted through the use of a deceptive and fraudulent deepfake may seek injunctive or other equitable relief prohibiting the publication of such deceptive and fraudulent deepfake and may also bring an action for general or special damages against the sponsor. Violators are guilty of a misdemeanor and must be imprisoned not more than 90 days or fined not more than $500, or both. For a second offense occurring within five years of a previous conviction, the violator is guilty of a felony and must be imprisoned not more than five years or fined not more than $1,000, or both.
These provisions do not apply to broadcast stations that broadcast a deceptive and fraudulent deepfake as part of a (1) bona fide newscast, (2) news interview, (3) news documentary, or (4) on-the-spot coverage of bona fide news events. In such cases, the broadcast must clearly acknowledge, through conspicuous content or a disclosure, that there are questions about the authenticity of the materially deceptive content. These provisions also will not apply to a broadcast station that is paid to broadcast a deceptive and fraudulent deepfake if the station has made a good faith effort to establish that the depiction is not a deceptive and fraudulent deepfake.
Texas (SB 751) (Enacted June 2019)
Under Texas law as of September 1, 2019, a person commits an offense if the person, with intent to injure a candidate or influence the result of an election, creates a deepfake video and causes the deepfake video to be published or distributed within 30 days of an election. The law appears to exempt broadcasters from liability if they merely distribute the ad.
Washington (SB 5152) (Enacted May 2023)
Under Washington law as of July 23, 2023, an electioneering communication that contains synthetic media may not be distributed without a disclosure that the content has been manipulated.
A candidate whose voice or likeness appears in synthetic media distributed without the required disclosure within 60 days of an election may seek to enjoin distribution of the media and bring an action for general or special damages against the party distributing the media. A federally licensed broadcast station subject to federal law prohibiting censorship of electioneering communications by a legally qualified candidate is exempt from liability.
Wisconsin (SB 644) (Introduced November 2023)
This pending bill requires that any audio or video communication paid for by a candidate committee, legislative campaign committee, political action committee, independent expenditure committee, political party, recall committee or referendum committee include a disclosure stating that the communication “[c]ontains content generated by AI” and satisfy other disclosure requirements specific to each type of communication. Violators are subject to a fine of up to $1,000 for each violation. The bill does not address broadcaster liability.
* * * * *
Broadcasters should stay in close contact with a member of Pillsbury’s Communications Practice for advice on complying with all political broadcasting laws, including any relevant laws relating to the use of AI in political ads.
A PDF of this article can be found at Artificial Intelligence-Generated Content in Political Ads Raises New Concerns for Broadcasters.