In response to the growing threat of deepfake technology, the Pentagon’s Defense Innovation Unit is taking steps to adopt commercially available capabilities to detect and attribute manipulated multimedia content. Deepfakes, which use machine learning to create realistic but fabricated audio and visual material, have become increasingly common and harder to detect. The Pentagon is particularly concerned about the potential for adversaries to use deepfakes for deception and disinformation.
To address this issue, the Defense Innovation Unit is seeking solutions from industry partners that can meet its criteria for deepfake detection and attribution. The initiative comes ahead of the 2024 election, but officials emphasize that deepfake technology poses a threat at all times, not just during election cycles. Submissions for potential collaboration and investment are being accepted through June 27, with a focus on solutions that comply with Responsible AI Guidelines and open systems architecture.
The Defense Advanced Research Projects Agency has worked on tools to combat deepfakes since 2019, but the Defense Innovation Unit's approach is complementary and distinct. DIU plans to work with industry partners to prototype and integrate cutting-edge solutions to address the threat of deepfakes. Its interest in commercially available solutions reflects the maturity of the market and the department's desire to leverage external expertise in this critical technology area.
This initiative marks a significant step in the Pentagon’s efforts to address the growing threat posed by deepfake technology and underscores the importance of rapid adoption of detection and attribution capabilities to protect against deception and disinformation campaigns.