X has begun testing the use of AI-generated Community Notes, marking a significant shift in how the platform approaches crowdsourced fact-checking.
Highlights
- AI-Assisted Fact-Checking: X (formerly Twitter) is testing AI-generated drafts for Community Notes, using both its own Grok model and third-party LLMs like ChatGPT via API integrations.
- Human Review Still Required: All AI-generated notes will go through the existing human contributor consensus process before being published publicly, ensuring human oversight remains central.
- Transparency Measures: AI-written notes will carry clear labeling to differentiate them from human-written content, enhancing transparency for users.
- External Developer Participation: X is inviting outside developers to create their own AI-powered “Note Writers,” with strict review, testing, and approval phases before deployment.
- AI Risks Acknowledged: X’s research paper highlights known AI challenges like hallucinations and misinformation, stressing that AI tools will only assist—not replace—human fact-checkers.
- Scaling Capacity: X hopes the AI-assisted approach will help scale Community Notes output and respond faster to viral misinformation, while maintaining quality and neutrality standards.
- Performance Monitoring: AI bots will build or lose publishing privileges over time, depending on the community’s helpfulness ratings of their draft notes.
- Current Status: The AI Community Notes feature remains in limited testing with no set timeline for full rollout.
This pilot initiative explores whether generative AI tools can assist in creating context around potentially misleading posts—without compromising accuracy or trust.
Expanding Community Notes with AI Assistance
Community Notes, first launched as Birdwatch before Elon Musk’s acquisition and expanded under his leadership, has become a central part of X’s content moderation strategy.
The feature allows verified contributors to add explanatory notes to posts. However, for a note to appear publicly, it must pass a consensus test, where users with differing viewpoints agree on its helpfulness and factual accuracy.
Until now, the system has been entirely human-driven. With the new pilot, AI chatbots—both X’s proprietary model Grok and third-party large language models integrated via API—will begin contributing draft notes.
These AI-generated notes will still pass through the same human review pipeline and community consensus process before publication.
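To make the pipeline concrete, here is a minimal sketch of what an automated Note Writer might look like. Only the OpenAI client call reflects a real, publicly documented API; the model choice, the prompt, and the `submit_draft_note` submission step are assumptions, since X has not published a Note Writer API.

```python
# Hedged sketch of an AI "Note Writer": draft a Community Note with an LLM,
# then hand it to X's human review pipeline. The submit step is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_community_note(post_text: str) -> str:
    """Ask the model for a short, neutral, sourced context note on a post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": ("You draft Community Notes: concise, neutral context "
                         "for potentially misleading posts, citing sources.")},
            {"role": "user",
             "content": f"Draft a note for this post:\n{post_text}"},
        ],
    )
    return response.choices[0].message.content


def submit_draft_note(post_id: str, note_text: str) -> None:
    """Hypothetical: queue the draft for the existing human rating process.
    Per X's design, AI drafts are never published directly; contributors
    rate them first."""
    ...


draft = draft_community_note("Post claiming a study proved X causes Y...")
submit_draft_note("1234567890", draft)
```

The key design point is the last step: the bot's output is a draft entering the same queue as human submissions, not a published note.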
Addressing Risks of AI-Generated Content
The use of AI in fact-checking is not without concerns. Generative AI models are known for “hallucinations”—producing inaccurate or misleading information with confidence. X acknowledges this risk in a newly released research paper authored by its Community Notes team.
According to the paper, the goal is not for AI to replace human judgment but to support contributors with draft suggestions and informational context, which humans can review and edit. Over time, human feedback may also help fine-tune the AI models through reinforcement learning.
“The goal is not to create an AI assistant that tells users what to think,” the research paper clarifies. Instead, X aims to enhance critical thinking and contextual understanding among its users.
Developer Access and External AI Contributions
As part of the program, X is inviting external developers to build their own AI-powered “Note Writers.” These bots can use models such as Grok or OpenAI’s ChatGPT but must pass X’s internal review and testing process.
Before they are allowed to publish live notes, these AI-generated contributions will remain in a restricted test environment, with performance monitored for helpfulness and accuracy.
Maintaining Human Oversight
Despite AI’s involvement in drafting notes, human approval remains mandatory before any note is shown publicly.
Community Notes—whether written by humans or generated by AI—are only published if contributors from diverse perspectives rate them as helpful through X’s established rating and consensus system.
This human-in-the-loop design aims to prevent errors and bias, addressing key concerns from both users and industry analysts about over-reliance on automated systems in sensitive areas like fact-checking.
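X’s production scoring algorithm is open source and based on matrix factorization over the full rating matrix; as a simplified illustration of the underlying “bridging” idea, the sketch below surfaces a note only when raters from both of two viewpoint clusters rate it helpful. The cluster labels, minimum counts, and threshold are all assumptions for illustration.

```python
# Simplified illustration of "bridging" consensus (not X's production
# algorithm): a note is surfaced only when raters from *both* viewpoint
# clusters rate it helpful at a sufficient rate, so a raw majority from
# one side is never enough.
from dataclasses import dataclass


@dataclass
class Rating:
    rater_cluster: str  # e.g. "A" or "B": an inferred viewpoint grouping
    helpful: bool


def bridged_helpfulness(ratings: list[Rating],
                        min_per_cluster: int = 5,
                        threshold: float = 0.7) -> bool:
    """Require agreement across viewpoint clusters, not just overall."""
    for cluster in ("A", "B"):
        cluster_ratings = [r for r in ratings if r.rater_cluster == cluster]
        if len(cluster_ratings) < min_per_cluster:
            return False  # not enough signal from this viewpoint group
        helpful_rate = sum(r.helpful for r in cluster_ratings) / len(cluster_ratings)
        if helpful_rate < threshold:
            return False  # one side finds the note unhelpful
    return True
```

Because the same gate applies to AI-drafted and human-written notes alike, the consensus requirement, rather than the note’s origin, decides what users ultimately see.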
Scaling Fact-Checking Capacity
X anticipates that AI-assisted drafting could increase the daily volume of Community Notes, helping the platform respond faster to viral misinformation.
The company emphasizes that quality standards for accuracy and neutrality remain unchanged, regardless of whether a note originates from a human or an AI tool.
Transparency Measures
To maintain trust, all AI-generated notes will be clearly labeled, making it transparent to users when a contribution comes from an AI system.
AI-powered Note Writers will build or lose publishing privileges over time, based on how consistently their drafts are rated as helpful by the community.
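X has not detailed how these privileges are computed; one plausible shape is an exponentially weighted helpfulness score with earn and lose thresholds, sketched below. The update rule and all parameter values are assumptions, not X’s actual mechanism.

```python
# Hedged sketch of reputation-gated publishing for AI Note Writers: an
# exponentially weighted helpfulness score earns or revokes the ability
# to submit notes. All parameters are assumptions.
class NoteWriterReputation:
    def __init__(self, alpha: float = 0.1,
                 earn_at: float = 0.8, lose_at: float = 0.5):
        self.score = 0.0        # smoothed helpfulness, in [0, 1]
        self.alpha = alpha      # weight given to the newest rating
        self.earn_at = earn_at  # score needed to gain publishing rights
        self.lose_at = lose_at  # score below which rights are revoked
        self.can_publish = False

    def record_rating(self, rated_helpful: bool) -> None:
        """Fold one community rating into the running score and
        update publishing rights accordingly."""
        self.score = (1 - self.alpha) * self.score + self.alpha * float(rated_helpful)
        if self.score >= self.earn_at:
            self.can_publish = True
        elif self.score <= self.lose_at:
            self.can_publish = False
```

A scheme like this makes privileges self-correcting: a bot that drifts into producing unhelpful drafts loses its standing automatically, without manual intervention.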
This move comes as other platforms like Meta, TikTok, and YouTube are also exploring or implementing community-driven fact-checking systems. Meta, for example, recently transitioned away from third-party fact-checkers to test its own Community Notes-like model.