Artificial Intelligence Poses a Huge Threat. It's Time to Put a Seatbelt on It

The 2024 presidential election is rapidly approaching, and we aren't prepared for how new artificial intelligence models will affect our elections. Fake, AI-generated images of former President Trump resisting arrest provided a fresh example of the firehose of lies, deception, and false information that AI threatens to unleash on our electoral process. The potential for AI-generated deepfake videos, text-based chatbots, and voice-synthesis technologies to disrupt elections in 2024 is nearly limitless, especially as the code for powerful AI large language models leaks to the worst parts of the internet, like 4chan.

Even AI knows it: We recently asked OpenAI's AI text generator, ChatGPT, about the risk that AI technology could be used to carry out political sabotage. Here's how ChatGPT responded:

"Yes, political sabotage is a real risk of AI applications. As AI becomes more advanced, it could be used to manipulate public opinion, spread propaganda, or launch cyber-attacks on critical infrastructure. AI-powered social media bots can be used to amplify certain messages or opinions, creating the illusion of popular support or opposition to a particular issue. AI algorithms can also be used to create and spread fake news or disinformation, which can influence public opinion and sway elections…”

Campaign lies, mass data gathering, and biased algorithms are, no doubt, nothing new. What is new is the scale at which these tools can now be deployed to further polarize our society.

This has led some to call for an outright moratorium on AI development, but to us that seems extreme. Instead, our focus should be on making sure we control AI, not the other way around, and especially on protecting our political system.

One would think the developers of these technologies would be concerned about bringing a new Frankenstein's monster into the world and would take every possible step to guard against their abuse. It's not a heavy lift; just ask ChatGPT, as we did: we asked whether OpenAI could label its output so that people would know content was generated by AI rather than by a real person. ChatGPT immediately responded:

"Yes! OpenAI and other AI companies could add digital watermarks and metadata to label content as generated by AI, and make the labels nearly-indelible through encryption…"

Jonathan M. Winer in Newsweek

Jonathan M. Winer

Jonathan M. Winer has served as the United States Special Envoy for Libya, deputy assistant secretary of state for international law enforcement, and counsel to United States Senator John Kerry. He has written and lectured widely on U.S. Middle East policy, counterterrorism, international money laundering, illicit networks, corruption, and U.S.-Russia issues.

In 2016, Winer received the highest award granted by the Secretary of State, for "extraordinary service to the U.S. government" in avoiding the massacre of over 3,000 members of an Iranian dissident group in Iraq, and for leading U.S. policy in Libya "from a major foreign policy embarrassment to a fragile but democratic, internationally recognized government." In 1999, he received the Department's second highest award, for having "created the capacity of the Department and the U.S. government to deal with international crime and criminal justice as important foreign policy functions." The award stated that "the scope and significance of his achievements are virtually unprecedented for any single official."
