“This work represents a significant step forward in strengthening our information advantage as we combat sophisticated disinformation campaigns and synthetic-media threats,” says Bustamante. Hive was chosen out of a pool of 36 companies to test its deepfake detection and attribution technology with the DOD. The contract could enable the department to detect and counter AI deception at scale.
Defending against deepfakes is “existential,” says Kevin Guo, Hive AI’s CEO. “This is the evolution of cyberwarfare.”
Hive’s technology was trained on a large corpus of content, some AI-generated and some not. It picks up on signals and patterns in AI-generated content that are invisible to the human eye but detectable by an AI model.
“Turns out that every image generated by one of these generators has that sort of pattern in there if you know where to look for it,” says Guo. The Hive team continually tracks new generative models and updates its detection technology accordingly.
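Hive has not disclosed how its detector works, but one family of telltale signals studied in the research literature is frequency-domain artifacts: the upsampling layers in many image generators leave periodic, high-frequency patterns that stand out in an image’s Fourier spectrum. A toy sketch of that general idea (the `cutoff` and `threshold` values here are arbitrary, chosen for illustration, and not Hive’s method):

```python
import numpy as np

def spectral_energy_ratio(image: np.ndarray, cutoff: float = 0.5) -> float:
    """Fraction of an image's spectral energy above a radial frequency cutoff.

    Toy heuristic: periodic upsampling artifacts in generated images show up
    as excess energy in the outer (high-frequency) part of the 2-D spectrum.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum's centre (DC), normalised so that
    # r = 1 at the Nyquist frequency along the shorter axis.
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

def looks_generated(image: np.ndarray, threshold: float = 0.05) -> bool:
    # Flag images whose high-frequency energy exceeds a tuned threshold.
    return spectral_energy_ratio(image) > threshold
```

A smooth gradient concentrates its energy near DC, while a checkerboard (an extreme stand-in for a periodic artifact) puts half of its energy at the Nyquist frequency and would be flagged. Real commercial detectors are learned classifiers that combine many such cues, which is why they must be retrained as new generators appear.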
The tools and methodologies developed through this initiative have the potential to be adapted for broader use, not only addressing defense-specific challenges but also safeguarding civilian institutions against disinformation, fraud, and deception, the DOD said in a statement.
Hive’s technology provides state-of-the-art performance in detecting AI-generated content, says Siwei Lyu, a professor of computer science and engineering at the University at Buffalo. He was not involved in Hive’s work but has tested its detection tools.
Ben Zhao, a professor at the University of Chicago who has also independently evaluated Hive AI’s deepfake technology, agrees but points out that it is far from foolproof.
“Hive is certainly better than most of the commercial entities and some of the research techniques that we tried, but we also showed that it is not at all hard to circumvent,” Zhao says. The team found that adversaries could tamper with images in a way that bypassed Hive’s detection.