OpenAI has introduced a new identity verification system called ‘Verified Organisation,’ which requires organizations to submit a government-issued ID to access its most advanced AI models. Each ID can verify only one organization every 90 days, and not every applicant will qualify. The move aims to enhance security and prevent misuse, such as intellectual property theft or violations of API policies. But is this a necessary safeguard or an unnecessary hurdle for innovation?
Why Verification Now?
OpenAI’s decision follows reported incidents of misuse, including alleged unauthorized data extraction in late 2024 by groups linked to North Korea and by the Chinese lab DeepSeek. The company had already restricted services in China earlier that year over policy violations, such as the use of ChatGPT accounts for social media monitoring. As its models grow more powerful, OpenAI has steadily tightened access controls to prevent exploitation.
‘This isn’t just about compliance—it’s about ensuring these tools aren’t weaponized,’ says a source familiar with OpenAI’s security policies. The Verified Organisation system is designed to curb abuse while maintaining access for legitimate users.
How Verification Works
Organizations must submit a government-issued ID from supported countries to complete verification. Key details of the process include:
| Requirement | Details |
| --- | --- |
| ID Type | Government-issued (e.g., passport, business license) |
| Frequency | One ID can verify only one organization per 90 days |
| Eligibility | Not all applicants will qualify; approval is at OpenAI’s discretion |
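To make the 90-day rule concrete, the sketch below models it as a simple cooldown check in Python. Everything in it is an illustrative assumption rather than a description of OpenAI’s systems: the idea of fingerprinting the ID document, the function names, and the in-memory store are all hypothetical.

```python
from __future__ import annotations

from datetime import datetime, timedelta, timezone

# Illustrative sketch only: models the published rule that one ID can
# verify at most one organization per 90 days. The fingerprinting idea
# and all names here are assumptions, not OpenAI's implementation.

VERIFICATION_WINDOW = timedelta(days=90)

# Maps a hashed ID document to the time it last verified an organization.
_last_verification: dict[str, datetime] = {}


def can_verify(id_fingerprint: str) -> bool:
    """Return True if this ID has not verified an organization within
    the last 90 days and is therefore eligible to verify another one."""
    last = _last_verification.get(id_fingerprint)
    return last is None or datetime.now(timezone.utc) - last >= VERIFICATION_WINDOW


def record_verification(id_fingerprint: str) -> None:
    """Record that this ID has just been used to verify an organization."""
    _last_verification[id_fingerprint] = datetime.now(timezone.utc)
```

Under this model, reusing the same document for a second organization inside the window simply fails the eligibility check, which is precisely the bulk-verification scenario the restriction targets.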
The restrictions aim to prevent bulk verification and fraudulent sign-ups. But some developers worry this could slow down access for smaller teams and startups. ‘Will this create a two-tier system where only well-established players get priority?’ asks an independent AI researcher.
Accountability vs. Accessibility
OpenAI’s approach reflects a broader industry trend toward stricter AI governance. While the Verified Organisation system may deter bad actors, critics argue it could stifle innovation by adding bureaucratic layers. Supporters, however, see it as a necessary step to ensure responsible AI deployment.
As one tech policy analyst notes, ‘The balance between security and openness is delicate. OpenAI is betting that verified access will do more good than harm in the long run.’ Whether this system becomes a model for accountability or a barrier to innovation remains to be seen—but for now, the era of anonymous AI access is coming to an end.