“We believe a democratic vision for AI is essential to unlocking its full potential and ensuring its benefits are broadly shared,” OpenAI wrote, echoing similar language in the White House memo. “We believe democracies should continue to take the lead in AI development, guided by values like freedom, fairness, and respect for human rights.”
It offered a number of ways OpenAI could help pursue that goal, including efforts to “streamline translation and summarization tasks, and study and mitigate civilian harm,” while still prohibiting its technology from being used to “harm people, destroy property, or develop weapons.” Above all, it was a message from OpenAI that it is on board with national security work.
The new policies emphasize “flexibility and compliance with the law,” says Heidy Khlaaf, chief AI scientist at the AI Now Institute and a safety researcher who coauthored a paper with OpenAI in 2022 about the possible hazards of its technology in contexts including the military. The company’s pivot “ultimately signals an acceptability in carrying out activities related to military and warfare as the Pentagon and US military see fit,” she says.
Amazon, Google, and OpenAI’s partner and investor Microsoft have competed for the Pentagon’s cloud computing contracts for years. Those companies have learned that working with defense can be incredibly lucrative, and OpenAI’s pivot, which comes as the company expects $5 billion in losses and is reportedly exploring new revenue streams like advertising, could signal that it wants a piece of those contracts. Big Tech’s relationships with the military also no longer elicit the outrage and scrutiny that they once did. But OpenAI is not a cloud provider, and the technology it’s building stands to do much more than simply store and retrieve data. With this new partnership, OpenAI promises to help sort through data on the battlefield, provide insights about threats, and help make the decision-making process in war faster and more efficient.
OpenAI’s statements on national security perhaps raise more questions than they answer. The company wants to mitigate civilian harm, but for which civilians? Does contributing AI models to a program that takes down drones not count as developing weapons that could harm people?
“Defensive weapons are still indeed weapons,” Khlaaf says. They “can often be positioned offensively subject to the locale and aim of a mission.”
Beyond those questions, the world’s foremost AI company, which has wielded enormous leverage in the industry and has long pontificated about how to steward AI responsibly, will now operate in a defense-tech sector that plays by an entirely different set of rules. In that system, when your customer is the US military, tech companies do not get to decide how their products are used.
OpenAI’s defense contract completes its military pivot.