
The State of Docs in 2026: Five Trends That Will Define the Next Era

AI readership is up 500%. Notion shipped 21,000 agents. Confluence got Rovo. GitBook published the State of Docs. Five trends from across the industry that tell us where documentation is heading.

Thinking Out Loud

Every few months I block out a morning to just read. Not Rasepi code, not GitHub issues. Competitor blogs, industry reports, keynote announcements, developer surveys. Whatever shipped in the last quarter that touches documentation, knowledge management, or AI-assisted workflows.

I did that last week, and the picture that emerged was sharper than I expected. Not because any single announcement was groundbreaking, but because five separate trends are converging, and when you line them up, they point clearly at what documentation platforms will need to do over the next two years.

Here's what I found.

1. AI is the primary reader now. Not humans.

GitBook published a striking number in their AI docs data report: AI readership of documentation increased over 500% in 2025. Five hundred percent. That's not a rounding error.

Meanwhile, Stack Overflow's 2024 Developer Survey showed that 61% of developers spend more than 30 minutes a day searching for answers. But how they search has shifted. GitHub's own survey found 97% of enterprise developers have used AI coding tools. By 2026, 84% of developers use AI tools daily, with 41% of code now AI-generated. These people aren't navigating your wiki sidebar. They're asking Claude or Copilot, and the AI is reading your docs on their behalf.

The implication is hard to overstate. Your most frequent documentation consumer is no longer a person with a browser tab open. It's a language model making retrieval calls. And that model has no ability to squint at a page and think "hmm, this looks outdated."

GitBook spotted this early and responded with their State of Docs 2026 report and a push toward machine-readable formats. They also shipped skill.md, a convention for structuring product information specifically for AI agents. Google went further with their Gemini API Docs MCP, which connects coding agents to current documentation via the Model Context Protocol. Their reasoning was explicit: agents generate outdated code because their training data has a cutoff date. The MCP fix brought their eval pass rate to 96.3%.

So the first trend is settled. AI is the primary reader. The platforms that treat this as a core design constraint, not a feature to add later, will have a structural advantage.

2. Freshness and trust metadata are becoming mandatory

Anthropic interviewed 81,000 Claude users in December 2025 and published the results in March 2026. It's the largest qualitative study of AI users ever conducted (159 countries, 70 languages). The single most-cited concern? Unreliability. 27% of respondents named it as their top worry, and 79% of those people had experienced it firsthand.

That number should keep every documentation team up at night.

When AI answers are unreliable, the problem isn't always the model. Often the model is faithfully reproducing what it found in a stale document. The model didn't hallucinate. Your docs were just wrong, and nobody flagged them.

Stack Overflow's data reinforces this from a different angle: 81% of developers expect AI to be more integrated in how they document code in the coming year. If 81% of your users are feeding docs to AI, and 27% of AI users say unreliability is the biggest issue, you have a trust problem that no amount of prompt engineering fixes. The fix is at the source.

This is why freshness metadata matters. Not "last edited" timestamps (those tell you when someone touched the file, not whether the content is still accurate). Real freshness: review status, link health, translation alignment, readership signals, content drift detection. Metadata that a machine can read and use to decide whether a document is safe to cite.

I keep coming back to a simple framing. Your documentation needs a credit score. Not a timestamp. A credit score. (We've been building exactly this with Rasepi's freshness scoring system, and honestly, seeing the industry data only makes me more convinced it's the right call.)
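To make the "credit score" framing concrete, here's a minimal sketch of what such a score could look like. Everything here is illustrative: the signal names, the 90-day review window, and the weights are assumptions I'm making for the example, not Rasepi's actual formula.

```python
from dataclasses import dataclass

@dataclass
class DocSignals:
    """Hypothetical freshness signals for one document (field names are illustrative)."""
    days_since_review: int    # time since a human confirmed accuracy
    broken_link_ratio: float  # 0.0 (all links healthy) .. 1.0 (all broken)
    translation_lag: float    # 0.0 (all languages in sync) .. 1.0 (fully drifted)

def freshness_score(s: DocSignals, review_window_days: int = 90) -> int:
    """Blend signals into a 0-100 score. Weights are made up for illustration."""
    review = max(0.0, 1.0 - s.days_since_review / review_window_days)
    links = 1.0 - s.broken_link_ratio
    i18n = 1.0 - s.translation_lag
    return round(100 * (0.5 * review + 0.3 * links + 0.2 * i18n))

fresh = freshness_score(DocSignals(days_since_review=10,
                                   broken_link_ratio=0.0,
                                   translation_lag=0.1))
stale = freshness_score(DocSignals(days_since_review=200,
                                   broken_link_ratio=0.4,
                                   translation_lag=0.8))
```

The point of the sketch is the shape, not the numbers: a score like this is something a machine can read before deciding whether a document is safe to cite, which a bare "last edited" timestamp never tells you.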

3. Translation is moving from "project" to "pipeline"

DeepL published a piece in February called "The 6 Translation Transformations Global Businesses Can't Afford to Miss". Their argument: translation is becoming a continuous operating challenge, not a batch project you do quarterly.

That tracks with everything I see.

The old model was straightforward. Write in English. When you have budget, hire a translator or run it through a service. Get the translations back. Upload them. Done until next time. The problem is that "next time" comes faster and faster when your product ships weekly and your docs update constantly. By the time the German version is back from review, the English source has already changed twice.

DeepL's own Customization Hub now offers glossaries, style rules, and formality settings, which is great. But if those tools live outside your documentation platform, you're managing a translation toolchain: editor, export, translate, review, reimport, repeat. Every step is a chance for drift.

Notion has no native multilingual support at all. Confluence offers it through marketplace plugins. GitBook added auto-translate in August 2025, which is a step, but it operates at the page level.

The real shift is from page-level to block-level. When you track translations at the paragraph level, you only retranslate what actually changed. A typical edit touches maybe two paragraphs out of forty. That's 95% less translation work. (This is Rasepi's core translation architecture and, honestly, the thing I'm most proud of in the product. But even setting us aside, the industry direction is clear: continuous, incremental, embedded translation is where this is heading.)
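The mechanics of "only retranslate what changed" can be sketched in a few lines: hash each block, compare against the hashes from the last translated version, and send only the diff to the translator. This is a toy version under an assumption I'm making for brevity (position-based block IDs); a real system would use stable block identifiers so that inserting a paragraph doesn't shift every ID after it.

```python
import hashlib

def block_hashes(doc: str) -> dict[str, str]:
    """Map each paragraph (block) to a content hash.
    Position-based IDs are a simplification; real systems use stable block IDs."""
    blocks = [b.strip() for b in doc.split("\n\n") if b.strip()]
    return {f"block-{i}": hashlib.sha256(b.encode()).hexdigest()
            for i, b in enumerate(blocks)}

def blocks_to_retranslate(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Only blocks whose hash changed, or that are new, need retranslation."""
    return [bid for bid, h in new.items() if old.get(bid) != h]

old = block_hashes("Intro paragraph.\n\nSetup steps.\n\nTroubleshooting.")
new = block_hashes("Intro paragraph.\n\nUpdated setup steps.\n\nTroubleshooting.")
changed = blocks_to_retranslate(old, new)  # only the edited paragraph
```

Editing one paragraph out of three flags exactly one block for retranslation; the other two keep their existing translations untouched.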

4. AI agents need structured content, not wiki pages

This one crystallised for me when Notion announced Custom Agents in February. 21,000 agents built during early access. Agents that answer questions from knowledge bases, route tasks, compile status reports. Ramp alone has over 300 agents.

Atlassian went in a similar direction. Rovo AI in Confluence pulls context from across Atlassian and third-party apps to generate content. Their pitch: "context-rich, high-quality content grounded in your team's existing work."

And then Anthropic shipped agent teams in Claude Code, where multiple AI agents coordinate autonomously on complex tasks. Opus 4.6 scores 76% on the 8-needle 1M MRCR benchmark (up from 18.5% for the previous model), meaning it can actually retrieve information buried deep in massive document sets without losing track.

All three companies are building agents that consume documentation. None of them have solved the quality-of-source problem.

Notion's Custom Agents documentation explicitly acknowledges the prompt injection risk when agents read untrusted content. Atlassian's Rovo grabs whatever it finds in your Confluence. If that content is three months stale, Rovo doesn't know. It builds on it anyway.

For agents to work reliably, they need more than pages of text. They need structured content with stable identifiers, explicit freshness signals, clear classification metadata, and the ability to distinguish "this is current and reviewed" from "this exists but nobody's touched it in a year." Wiki pages don't provide that. Structured block-level content with trust metadata does.
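What "structured content with trust metadata" buys an agent can be shown with a small sketch. The field names and the 90-day cutoff are my own assumptions for illustration, not any platform's actual schema; the idea is simply that an agent can filter on metadata before citing, instead of treating every page as equally trustworthy.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Block:
    """A hypothetical content block carrying trust metadata."""
    block_id: str        # stable identifier the agent can cite
    text: str
    reviewed_on: date    # last human review, not last edit
    review_status: str   # "approved" | "pending" | "expired"
    doc_type: str        # e.g. "runbook", "reference", "draft"

def citable(blocks: list[Block], max_age_days: int = 90) -> list[Block]:
    """Agent-side filter: only cite blocks that are approved and recently reviewed."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [b for b in blocks
            if b.review_status == "approved" and b.reviewed_on >= cutoff]

recent = Block("b1", "Restart the service.", date.today() - timedelta(days=10),
               "approved", "runbook")
stale = Block("b2", "Old procedure.", date.today() - timedelta(days=200),
              "approved", "runbook")
pending = Block("b3", "New draft.", date.today() - timedelta(days=1),
                "pending", "draft")
safe = citable([recent, stale, pending])
```

With a plain wiki page, all three snippets look identical to the agent. With metadata like this, the stale runbook and the unreviewed draft are filtered out before they can contaminate an answer.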

5. Open source and self-hosting are making a comeback

This last one is more of a gut feeling backed by data than a single announcement.

GitBook open-sourced their published documentation in late 2024 and launched an OSS fund. Their reasoning: open source projects deserve free, high-quality documentation tooling. But the move also signals something broader.

Notion is cloud-only. No self-hosted option. Confluence Data Center exists but requires a license. When your documentation platform holds your most sensitive operational knowledge (incident playbooks, compliance procedures, architecture decisions), the question of "who controls this data?" is not abstract.

Anthropic's "Claude is a space to think" post from February made an interesting argument about trust and business models. Their core claim: advertising incentives are incompatible with a genuinely helpful AI assistant. They chose to stay ad-free so users can trust the tool.

I think there's a parallel for documentation platforms. If your docs system is closed-source and cloud-only, you can't verify what it feeds to AI. You can't audit the freshness calculations. You can't ensure your data stays in your control. For teams that are deploying AI assistants on top of their knowledge base (and increasingly, everyone is doing this), auditability matters.

This is not a polemic about open source being morally superior. Closed-source products can absolutely be trustworthy. But when you're building AI-powered workflows on top of your internal documentation, the ability to inspect and verify the system is a practical advantage. For us, MIT licensing Rasepi wasn't an afterthought. It was a design decision rooted in the same logic: documentation infrastructure should be auditable.

What these five trends mean together

Individually, each of these trends is manageable. AI reads your docs? Okay, add some machine-readable metadata. Freshness matters? Fine, add review dates. Translation needs to be continuous? Sure, integrate DeepL. Agents need structure? Fair, improve your content model. Sovereignty matters? Great, offer a self-hosted option.

But taken together, they describe a platform that looks fundamentally different from what most teams are using today.

The gap is architectural. These aren't five features you bolt on. They're five assumptions that need to be baked into the foundation. How content is stored (block-level, not page-level). How trust is modelled (freshness scores, not timestamps). How translation works (incremental, embedded, per-paragraph). How AI agents access content (structured APIs with metadata, not page scrapes). How data is controlled (open, auditable, self-hostable).

No established platform was designed around all five of these simultaneously. Some are adding them piece by piece. GitBook is moving fastest on the AI readability front. Notion is building agent infrastructure. Atlassian has enterprise distribution.

But designing for all five from day one? That's the advantage of starting fresh when the ground shifts.

I realise I'm biased here. We built Rasepi specifically because we saw these trends converging and wanted a platform that assumed all of them from the start. Block-level translation, forced expiry, freshness scoring, structured AI-ready content, open source. It's the thesis of the whole project.

But even if we didn't exist, I think any honest reading of what happened in the first quarter of 2026 points in the same direction. Documentation is becoming infrastructure. And infrastructure has different requirements than wiki pages.

The teams that figure this out first won't just have better docs. They'll have more reliable AI agents, lower translation costs, fewer compliance surprises, and knowledge bases that actually stay trustworthy over time.

That's the state of docs in 2026. The question isn't whether these trends are real. It's whether your platform was designed for them.

Five trends. One architectural question: was your documentation platform designed for 2026, or is it still serving assumptions from 2016?


Sources: GitBook AI docs data report, GitBook State of Docs 2026, GitBook skill.md, Google Gemini API Docs MCP, Stack Overflow 2024 Developer Survey, GitHub 2024 developer survey, Index.dev developer productivity statistics, Anthropic "What 81,000 People Want from AI", Anthropic "Claude is a space to think", Claude Opus 4.6, Notion Custom Agents, Atlassian Rovo in Confluence, DeepL "6 Translation Transformations", DeepL Customization Hub, GitBook open source documentation, GitBook auto-translate.

Keep your docs fresh. Automatically.

Rasepi enforces review dates, tracks content health, and publishes to 40+ languages.

Get started for free →