Deepfakes have caused a number of issues across social media, particularly when used to defraud people or for political manipulation. As the technology becomes more sophisticated, these challenges are set to grow.
To address these challenges, a new report from Northwestern University and the Brookings Institution outlines recommendations for defending against deepfakes. To support these recommendations, the researchers developed deepfake videos in a laboratory setting, finding they could be created ‘with little difficulty’.
In setting out the warning, the researchers write: “The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations… Security officials and policymakers will need to prepare accordingly.”
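To give a sense of that ease (an illustration of ours, not drawn from the report): modern open-source diffusion tools can synthesize photorealistic imagery in a handful of lines of Python. The sketch below uses the Hugging Face diffusers library with a publicly available Stable Diffusion checkpoint; the model identifier and prompt are illustrative assumptions, not details from the researchers' work.

```python
# A minimal sketch (not from the report) of how little code is needed to
# synthesize a photorealistic image with an off-the-shelf diffusion model.
# Assumes the Hugging Face diffusers library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A single text prompt is enough to produce a convincing synthetic frame.
image = pipe("a photorealistic portrait of a man speaking at a podium").images[0]
image.save("synthetic_frame.png")
```

The point is not this particular library but the accessibility it represents: producing convincing synthetic media now requires little more than a consumer GPU and a few lines of code, which is precisely the dynamic the researchers warn about.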
To support this line of inquiry, the researchers developed TREAD (Terrorism Reduction with Artificial Intelligence Deepfakes), a new algorithm that researchers can use to generate their own deepfake videos. By creating convincing deepfakes, researchers can better understand the technology within a security context.
To test the capability, the researchers used TREAD to create sample deepfake videos of the deceased Islamic State terrorist Abu Mohammed al-Adnani. The resulting video looks and sounds like al-Adnani, down to realistic facial expressions and audio, yet he is actually speaking words spoken by Syrian President Bashar al-Assad.
The researchers created the lifelike video within hours. The process was straightforward enough that the researchers say militaries and security agencies should assume rivals are capable of generating deepfake videos of any official or leader within minutes.
The researchers' key recommendation is that the U.S. and its allies develop a code of conduct for the responsible use of deepfakes.
The researchers predict the technology is on the brink of being used much more widely, and this could include targeted military and intelligence operations. The concern is that deepfakes could help fuel conflict by legitimizing war, sowing confusion, undermining popular support, polarizing societies, discrediting leaders and more.
Other recommendations designed to counterbalance the rise of deepfakes include educating the general public to increase digital literacy and critical reasoning.
The associated research report, “Deepfakes and international conflict,” has been published by Brookings.