Imagine that just as voters are receiving their ballots in the mail, a video of a local candidate, caught in a compromising situation, surfaces on popular social media sites. Is it real…or a deepfake? According to The Guardian, “The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake.” Deepfakes can harm businesses, candidates, and government leaders by further eroding already declining trust in the media.
The false narratives created by deepfake videos can make it nearly impossible for a viewer to distinguish real events from synthetic ones. This emerging technology allows bad actors to create videos, photographs, and audio that are compelling but exceedingly difficult to authenticate without sophisticated detection software.
How realistic are deepfakes? Check out the TikTok account @deeptomcruise to see deepfake videos of Tom Cruise that are eerily convincing. Except that’s not Tom Cruise we are watching, nor is it his voice. That is what makes this technology so scary.
Deepfake technology uses facial scans and dubbed audio to create a realistic fake video from an authentic video. And today, fake videos can deceive anyone. But for candidates, government officials, and members of the public, being duped can have grave consequences.
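For readers curious about the mechanics, the classic face-swap technique trains a single shared encoder together with one decoder per identity; the swap happens when footage of one person is encoded and then decoded with the other person’s decoder. Below is a minimal sketch of that idea (my own toy illustration in PyTorch, not the code behind any particular deepfake tool); real systems add face alignment, adversarial training, and far larger networks.

```python
# Toy sketch of the shared-encoder / two-decoder idea behind classic
# face-swap deepfakes. Illustrative only; sizes are deliberately tiny.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # 256-dim code: pose, expression, lighting
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()    # shared across both identities
decoder_a = Decoder()  # trained to reconstruct person A's face
decoder_b = Decoder()  # trained to reconstruct person B's face

# Training (not shown) teaches each decoder to rebuild its own person's face
# from the shared code. The deception is purely a matter of crossing the
# wires at inference time:
frame_of_a = torch.rand(1, 3, 64, 64)    # stand-in for an aligned face crop of A
fake_b = decoder_b(encoder(frame_of_a))  # person B's face wearing A's expression
```

Because the shared encoder learns only what the two faces have in common (pose, expression, lighting), decoding with the “wrong” decoder produces the other person’s face performing the original person’s movements.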
Here in Washington state, Senators Frockt, Dhingra, Liias, and Stanford, at the request of the Secretary of State, introduced SB 5817 earlier this year. This act would restrict the use of synthetic media in campaigns for elective office.
Synthetic media is defined as an image, audio recording, or video recording that has been intentionally manipulated to depict a real individual’s actions or speech that did not actually occur. Further, to qualify, the media must cause a reasonable person to have a fundamentally different understanding or impression of the individual than they would have had from viewing the unaltered original version of the content.
SB 5817 would require a disclosure, with exceptions, when synthetic media is used in electioneering communications. It would also create a cause of action (a set of facts sufficient to justify a lawsuit) for candidates whose voices or likenesses appear in synthetic media distributed without the required disclosure. To address niggling First Amendment concerns, the bill requires the following disclosure: “This image/video/audio has been manipulated.”
Distributing synthetic media without this disclosure would allow a candidate who is depicted in a deepfake to seek injunctive or other equitable relief to prohibit the distribution of such media.
Among the exceptions mentioned above: the disclosure requirement does not apply to radio or television stations airing synthetic media as part of a bona fide newscast, news interview, documentary, or on-the-spot news coverage of events. That said, the broadcast must acknowledge that there are questions about the authenticity of the synthetic media.
Nor does the requirement apply to internet websites or other periodicals of general circulation that routinely carry news and commentary of general interest. SB 5817 requires those publications to state that the synthetic media does not accurately represent the speech or conduct of the candidate, or that it constitutes satire or parody.
The federal government is concerned about synthetic media as well. In December 2019, as part of the National Defense Authorization Act for fiscal year 2020, then-President Trump signed the first federal legislation related to deepfakes. The measure received broad bipartisan support. The law requires a comprehensive report on the foreign weaponization of deepfakes and requires the government to notify Congress of foreign deepfake disinformation activities targeting U.S. elections. It also established a “Deepfakes Prize” competition to encourage research and commercialization of deepfake detection technologies.
In late May of this year, the Senate Committee on Homeland Security and Governmental Affairs issued a report to accompany S. 2559, the Deepfake Task Force Act. The report asserts:
“As the software underpinning these technologies becomes easier to acquire and use, the dissemination of deepfake content across trusted media platforms has the potential to undermine national security and erode public trust in our democracy, among other nefarious impacts.”
The Congressional Research Service points out that “state adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately.” Such videos could “erode public trust, negatively affect public discourse, or even sway an election.”
Deepfake examples have caused concern internationally as well. In June of this year, several mayors of European capitals were lured into video calls with a deepfake of Kyiv Mayor Vitali Klitschko. Fifteen minutes into her call, Berlin Mayor Franziska Giffey was told by the Klitschko imposter that Ukrainian refugees were cheating the Germans out of state benefits, and he asked that Ukrainian refugees be sent back for military service. Giffey became suspicious, and when the call was interrupted, her staff contacted the Ukrainian ambassador’s office, which confirmed that Giffey had not been talking to the real Klitschko.
But the ruse did not stop with Giffey. The mayor of Vienna, Michael Ludwig, also issued a public statement that he had spoken with Klitschko in a video call. Shortly afterward, the statement was retracted, and his staff announced that Ludwig was the unfortunate victim of a cybercrime.
Jose Luis Martinez-Almeida, the mayor of Madrid, reported a similar experience. His office filed a complaint with the police about someone impersonating Klitschko in a video call.
Several European mayors and one bad actor. Fortunately, most of them quickly understood they were not talking to the real mayor of Kyiv. But the targets could just as easily have been heads of state, and the stakes could have been much higher. So what can we do to protect world leaders and public officials from being deceived by deepfake actors? And what does this mean for the divisiveness in the U.S., or even for the average Washingtonian?
Lamentably, it means we no longer have the “luxury” of believing what we see and hear.
Sources linked below.
Dick Conoboy
Oct 18, 2022
Thanks, Elisabeth, for bringing forth this topic. But will the declaration have to divulge the manipulation that occurred? Or just make a simple statement?
And what if bad actors reproduce and then disseminate genuine videos, photos, etc., doctored only with a false advisory that the content contains synthetic material? “This image/video/audio has been manipulated.” Having manipulated the media by adding that very advisory, they would be telling the truth, no? Ceci n’est pas une pipe. (“This is not a pipe.”)
Elisabeth Britt
Oct 19, 2022
Great question, Dick. The bill summary states that the disclosure must state that the media has been manipulated and, when appearing in:
Visual media, be printed in at least the largest font size of other text in the media, or a size easily readable for the average viewer;
Video media, appear for the duration of the video; and
Audio media, be read in a clearly spoken manner and a pitch easily heard by the average listener at the beginning and the end of the audio, and at least every two minutes during the audio, if applicable.
But the summary is silent as to whether or not the actual manipulation must be revealed by the sponsor.
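To make the format rules above concrete, here is a toy sketch that checks a video or audio ad against the bill summary’s requirements. It is my own illustration, not statutory text; the field names and the five-second tolerance for “beginning” and “end” are assumptions.

```python
# Toy check of SB 5817's disclosure-format rules as summarized above.
# Field names and the tolerance value are illustrative assumptions.
from dataclasses import dataclass

DISCLOSURE = "This image/video/audio has been manipulated."

@dataclass
class VideoAd:
    duration_s: float          # total length of the video, in seconds
    disclosure_shown_s: float  # seconds the disclosure text is on screen

@dataclass
class AudioAd:
    duration_s: float                # total length of the audio, in seconds
    disclosure_times_s: list[float]  # offsets at which the disclosure is read

def video_complies(ad: VideoAd) -> bool:
    # Video media: the disclosure must appear for the duration of the video.
    return ad.disclosure_shown_s >= ad.duration_s

def audio_complies(ad: AudioAd, tolerance_s: float = 5.0) -> bool:
    # Audio media: read at the beginning and end, and at least every
    # two minutes in between.
    times = sorted(ad.disclosure_times_s)
    if not times:
        return False
    at_start = times[0] <= tolerance_s
    at_end = ad.duration_s - times[-1] <= tolerance_s
    gaps_ok = all(b - a <= 120 for a, b in zip(times, times[1:]))
    return at_start and at_end and gaps_ok

# A 5-minute radio spot reading the disclosure at 0:00, 2:00, 4:00, and 4:59:
print(audio_complies(AudioAd(duration_s=300, disclosure_times_s=[0, 120, 240, 299])))  # True
```

Notably, nothing in these format rules (or in this sketch) forces the sponsor to say what was manipulated, only that something was.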
Under existing public disclosure law, all political advertising must identify the sponsor of the advertisement. Political advertising circulated as an independent expenditure and distributed within 60 days of an election must also disclose the sponsor’s top five contributors (of at least $1,000) and the top three individual contributors to any of those top five donors that are political or incidental committees.
If passed as written, SB 5817 would:
1. Require a disclosure when any manipulated audio or visual media of a candidate is used in an electioneering communication.
2. Create a cause of action for candidates whose voices or likenesses appear in synthetic media distributed without disclosure.
3. Provide exceptions for parody and news reporting.
Staff asserts that this is a modest proposal: a constitutional regulation of disinformation that will allow the public to understand the material they are viewing. If synthetic media is identified immediately, it can be reported to media platforms to stop its spread.
Those opposing the measure claim that it is an end run around the First Amendment. They agree with the idea in principle but argue that terms such as “satire” are ill-defined and that the cause of action has the potential to be abused.
Media broadcasters are concerned about liability for the distribution of synthetic media, since existing law only regulates sponsors of electioneering communications, not those who share or create media.
Elisabeth Britt
Oct 22, 2022
Here’s an example of what might be defined as parody by SB 5817.
Earlier today, I found a Facebook media/news page titled The Salish Duck. We don’t know who the owner of the Salish Duck is; the page administrator is currently hiding that information from the public. That could be a potential PDC violation if the site’s administrator/owner publicly promotes a candidate or slate of candidates on the page.
It’s my understanding, after perusing Facebook’s policies on transparency, that a Confirmed Page Owner is required to ensure that a page is not concealing its authentic ownership. Additionally, a page that runs ads about social issues, elections, or politics in the USA must assign a page owner.
Facebook’s policy states that the Confirmed Page Owner section should contain the page owner’s verified legal name, city, country, and/or phone number; or the name of a blue-badge verified public figure, artist, or band; or a note that the Confirmed Owner’s information is not yet available or was hidden by the page administrator.
At this time, the Salish Duck is silent about its ownership.
I gather the Salish Duck’s administrator believes that the duck is exempt from Facebook’s transparency policies, since the page does not have a large U.S. audience and is not running paid ads about social issues, elections or politics on Facebook.
That said, the administrator is having a lot of fun promoting certain candidates over others in some of the page’s parody video posts. Which raises the question: what is the PDC’s legal definition of a political ad? Must an ad be paid for? Or does an attempt at mass communication via social media count as political advertising?
I’ll let readers read the PDC definition of what constitutes a political ad and make up their own minds.
Here’s the link to PDC Enforcement Guidelines.