OpenAI is introducing a new identity verification process for organizations accessing its API, potentially making it a requirement for using future advanced AI models.
Highlights
The new measure, named “Verified Organization,” appeared in a recent support document and reflects the company’s efforts to tighten access as its technologies grow more powerful and the risks of misuse increase.
Under the proposed system, organizations will be required to submit a government-issued ID from a country supported by OpenAI’s API services.
Each ID may only be used to verify one organization within a 90-day period, and OpenAI notes that not all applicants will be eligible. While not yet mandatory, this move suggests that future high-tier models may only be accessible to verified users.
OpenAI has stated that the decision is aimed at enhancing platform safety and encouraging responsible use.
According to the company, a small number of developers have used the API in ways that violate OpenAI’s usage policies. The verification process is being introduced to help prevent such misuse while still allowing access for developers operating within the guidelines.
This policy change appears to align with OpenAI’s broader focus on risk mitigation as AI capabilities evolve. It also reflects a growing need to manage who can access increasingly advanced tools, especially in light of recent security concerns.
In one instance, OpenAI disclosed that it is investigating possible unauthorized data usage by a group affiliated with DeepSeek, a China-based AI research lab.
According to reports, this group may have used OpenAI’s API to collect training data in late 2024, raising concerns about violations of terms of service and cross-border data security.
These developments have prompted the company to introduce stronger access controls, aiming to balance innovation with accountability.
API Access Restrictions Targeting Specific Regions
Beginning July 9, 2024, OpenAI plans to implement stricter API access restrictions for countries not listed among its supported regions.
While the company has not published an official list, multiple reports suggest that China (including Hong Kong), Russia, North Korea, and Iran will be among the affected countries.
Developers have reportedly received notifications from OpenAI about API traffic originating from unsupported locations. Organizations are being advised to review their API traffic and ensure compliance with regional access policies to avoid disruptions in service.
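As a rough illustration of how a client might detect such a block, the sketch below classifies an API error response as a regional access restriction. The response shape and the error code string are assumptions for illustration, not confirmed details of OpenAI's API:

```python
# Hypothetical sketch: decide whether an API error response looks like a
# regional access restriction. The "unsupported_country_region_territory"
# code string is an assumption, not confirmed OpenAI documentation.

def is_region_restricted(status_code: int, error_body: dict) -> bool:
    """Return True if the response resembles an unsupported-region block."""
    if status_code != 403:
        return False
    error = error_body.get("error", {})
    return error.get("code") == "unsupported_country_region_territory"

# Example: a 403 payload of the assumed shape is flagged as region-restricted.
blocked = is_region_restricted(
    403,
    {"error": {"code": "unsupported_country_region_territory",
               "message": "Country, region, or territory not supported"}},
)
```

A client receiving such a response could surface a clear compliance warning to operators rather than retrying, since the restriction is policy-based rather than transient.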
Concerns Over State-Backed Misuse of AI Tools
OpenAI has acknowledged incidents where state-sponsored actors have attempted to use its models for covert influence operations.
Countries named in these reports include Russia, China, Iran, and Israel. Although the scale and impact of these efforts have been limited, they underscore the potential misuse of open-access AI systems.
The identity verification initiative is intended to reduce such risks by ensuring that only organizations meeting strict criteria can access more advanced models.
Overview of the Verification Requirements
The Verified Organization process will require submission of a government-issued ID from a supported country. OpenAI limits ID usage to one organization per 90-day period, and eligibility is subject to the company’s review.
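The stated constraint, one organization per ID within any 90-day window, can be sketched as a simple registry. Everything here (class name, data structures, the decision to allow re-verification of the same organization) is illustrative, not OpenAI's implementation:

```python
# Minimal sketch of the stated rule: a government ID may verify at most one
# organization within a 90-day window. Illustrative only; not OpenAI's system.
from datetime import datetime, timedelta

VERIFICATION_WINDOW = timedelta(days=90)

class VerificationRegistry:
    def __init__(self):
        # Maps an ID fingerprint to (organization, time of last verification).
        self._last_used = {}

    def try_verify(self, id_hash: str, org_id: str, now: datetime) -> bool:
        """Allow verification unless this ID verified a different
        organization within the last 90 days."""
        record = self._last_used.get(id_hash)
        if record is not None:
            prev_org, prev_time = record
            if now - prev_time < VERIFICATION_WINDOW and prev_org != org_id:
                return False
        self._last_used[id_hash] = (org_id, now)
        return True

# Example: the same ID cannot verify a second organization 30 days later,
# but can once the 90-day window has elapsed.
registry = VerificationRegistry()
t0 = datetime(2025, 1, 1)
first = registry.try_verify("id-123", "org-A", t0)
second = registry.try_verify("id-123", "org-B", t0 + timedelta(days=30))
third = registry.try_verify("id-123", "org-B", t0 + timedelta(days=91))
```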
This step is part of a broader strategy to make AI development and deployment more secure and traceable. Last year, OpenAI also restricted access to its services in China, suggesting that this approach is not new but part of a gradual recalibration.