Despite growing integration of artificial intelligence across LinkedIn, one feature isn’t meeting expectations: the platform’s AI writing assistant.
Highlights
- Low Adoption: LinkedIn CEO Ryan Roslansky admitted that the AI writing assistant hasn’t gained the traction the company expected among users.
- Professional Caution: Users are wary of using generative AI for public posts due to credibility concerns—“This is your résumé online,” Roslansky emphasized.
- Social Friction: The platform’s professional tone creates unique barriers to adopting AI tools publicly, in contrast to more casual platforms like TikTok or X.
- AI Use Behind the Scenes: While public AI post creation is limited, many users still rely on external tools like ChatGPT or Claude to assist in drafting content before manual edits.
- AI Job Boom: LinkedIn has seen a 6x increase in job listings requiring AI skills and a 20x rise in users adding AI skills to their profiles.
- Reputation Risk: 53% of knowledge workers fear that visible AI use might make them look replaceable, despite 75% using AI in their daily work.
- Even Leaders Use It—Quietly: Roslansky joked about using Copilot to refine emails to Microsoft CEO Satya Nadella—showing even executives lean on AI, but discreetly.
In a recent interview with Bloomberg, LinkedIn CEO Ryan Roslansky acknowledged that user adoption of the tool has been lower than anticipated.
“It’s not as popular as I thought it would be, quite frankly,” Roslansky admitted.
The writing assistant, designed to help users refine posts before publishing, was expected to streamline content creation; instead, uptake has lagged.
Roslansky believes the hesitation stems from the platform’s professional nature, where users are more cautious about their online presence. Unlike platforms such as TikTok or X, where AI-generated content may be met with humor or indifference, LinkedIn users are more concerned about credibility.
“This is your résumé online,” Roslansky noted. “Getting called out on LinkedIn could affect your ability to create economic opportunity for yourself.”
Social Friction Slows AI Adoption
Roslansky pointed out a unique kind of “social friction” that affects AI feature adoption on LinkedIn. While generative tools like Microsoft Copilot are being embraced in productivity settings, their use in public-facing content—particularly on LinkedIn—comes with reputational risks.
For professionals, especially job seekers or thought leaders, maintaining an authentic voice remains critical.
Growing AI Skills, But Cautious Use in Public Posts
The reluctance to use LinkedIn’s AI post assistant stands in contrast to the platform’s broader AI engagement. Over the past year, LinkedIn has recorded a sixfold increase in job listings requiring AI expertise, while the number of users adding AI-related skills to their profiles has surged by 20 times.
Despite that growth, many users prefer to keep AI use behind the scenes. A Wired analysis of nearly 8,800 long-form English-language posts on the platform estimated that more than half (54%) were likely AI-generated, though not necessarily using LinkedIn’s built-in tools.
Many users—especially non-native English speakers—opt to use external models like Claude or ChatGPT to generate draft content, which they later refine manually.
This points to a broader trend: AI is part of the content workflow, but not always visibly.
Professionals Worry About Perception
According to the LinkedIn–Microsoft 2025 Work Trend Index, 75% of knowledge workers are already using generative AI tools—but 53% fear that doing so might make them appear replaceable. That fear is especially potent on a platform like LinkedIn, where professional branding is often under the microscope.
Even CEOs Use AI—Cautiously
Roslansky shared that he too uses AI in his day-to-day communication, especially when emailing Microsoft CEO Satya Nadella.
“Every time, before I send him an email, I hit the Copilot button to make sure that I sound Satya-smart,” he joked.
Authenticity vs. Automation
While AI is rapidly reshaping the professional landscape, the LinkedIn CEO’s remarks highlight an important nuance: automation may improve productivity, but authenticity still defines trust—especially when public perception and career opportunities are on the line.
As companies continue to build AI tools for the workplace, they may also need to focus on easing the social and psychological barriers around public AI usage—because even the best technology can struggle without user trust.