OpenAI Revises Usage Policy
On January 10, ChatGPT maker OpenAI revised its usage terms to remove an outright ban on using its models for military and wartime objectives. While the revised policy no longer specifically prohibits military use, it does state that users should not use its services to harm themselves or others, and it cites the development or use of weapons as an example of such harm.
Since amending the policy, OpenAI has said that national security AI applications are consistent with its mission. "For example, we are already collaborating with DARPA [the US Defense Advanced Research Projects Agency] to develop new cybersecurity tools to secure open source software that vital infrastructure and industry rely on."
Highlights:
- OpenAI Revises Usage Terms Allowing Collaboration with Military; Raises Concerns.
- India's IT Minister Highlights Policy Change as Confirmation of AI's Military Use.
- Potential Impacts on India: Data Protection Concerns, Vulnerability of Security Personnel Data, and the Need for Strategic Regulations.
"It was unclear if these beneficial use cases would have been permitted under our old 'military' policies," the company added. According to a TechCrunch report, OpenAI's purpose with this policy amendment is to provide clarity and allow for these discussions. DARPA had announced its work with OpenAI, Anthropic, Google, and Microsoft on building cybersecurity tools back in August 2023.
Why does this matter?
While OpenAI has explained the amended policy by pointing to its cybersecurity work with DARPA, the change suggests that the company is relaxing its stance on the military use of artificial intelligence (AI). The US military has been employing AI for some time. According to the Associated Press, during the Russia-Ukraine conflict the US military has operated small surveillance drones equipped with AI. It has also used AI to measure soldier fitness, track adversaries in space, and determine when Air Force planes require maintenance. It remains to be seen whether OpenAI and other AI startups will collaborate with the US military, or the militaries of other countries, for similar purposes.
Interestingly, India's IT Minister, Rajeev Chandrasekhar, cited the amended usage policy as "confirmation that AI can and will be used for military purposes." He noted that this reinforces India's approach to AI regulation based on safety, trust, and accountability.
Key Facts:
OpenAI has quietly amended its terms to allow it to cooperate with the military and in warfare. This is a concerning development, especially given that OpenAI has scraped a vast quantity of publicly available data from around the world. While it states that its technology should not be used to cause harm, this does not preclude its use for military and wartime purposes.
Now, how does the use of AI in the military and in warfare affect India? I don't want to be alarmist here, but IF this is a sign of intent, here are my thoughts:
1. No data protection: India's data protection law exempts publicly available personal data. Such data can be used for monitoring, training, and strategic planning, including microtargeting of specific individuals. We made a mistake with the data protection law.
2. Generative AI can analyze massive datasets to identify vulnerabilities and cyberattack techniques.
3. Data about identified security personnel is especially vulnerable, for example, location information from security personnel on patrol. Remember the Strava data leak? Because military personnel used the app, Strava ended up collecting patrol data in conflict zones. Such data is well suited to simulation exercises and mission planning.
4. Such data can be used to build and train autonomous reconnaissance systems.
5. Facial data can be used for target recognition.
What can we do?
- Amend or establish guidelines that limit the use of publicly available personal data for AI, military, and wartime objectives.
- Discourage military and defense personnel from using foreign AI tools.
- Increase resources for building Indian AI (we are currently doing a terrific job).
- Determine what data OpenAI has collected from Indian citizens. Subject these datasets to technical examination, with the option of ordering OpenAI to erase any that may endanger Indians.