Other platforms say “verified.” Verified by whom? When? Based on what? Rasepi shows you exactly why a document was downgraded and tells you when something in the real world makes your docs unreliable.
Most knowledge platforms let someone click a button that says “this is still accurate.” That badge tells you nothing about what happened since the last check. The real world does not wait for your review cycle.
Rasepi does more than count days since the last review. It watches for real changes and connects them to the docs they affect.
Connect your GitHub repos, CI/CD pipelines, monitoring tools, policy systems, and product configuration. Rasepi watches for changes that could impact your documentation: releases, deprecations, config updates, incident postmortems.
When a dependency changes (a new version ships, an API is deprecated, a tool is swapped out), Rasepi identifies which documents reference that source and evaluates the impact on each one.
Not just “this doc is stale.” The specific reason: “v2.0 released, your install guide still references v1.x” or “/api/v1/payments deprecated in latest deployment.” The signal is visible to everyone.
Reviewers see what changed, when it changed, and which sections are affected. Fix the specific parts, confirm the rest, and the trust score recovers immediately.
These are the kinds of changes that silently break documentation every day, and the signals Rasepi attaches to each one.
For example: a backend team deprecates /api/v1/payments in favour of /api/v2/payments and ships the change. Rasepi flags every document that still references the old endpoint.
Every trust score is a composite of internal and external signals. Each one is weighted, tracked, and visible in the score breakdown.
If you're deploying AI copilots, RAG pipelines, or enterprise search, you need a way to tell those tools which documents are worth citing. Trust scores give them that signal.
Set a trust threshold: “AI assistants may only cite documents with trust ≥ 0.8, reviewed within 30 days, and not flagged by an external change.” Rasepi enforces it. Your AI assistant never confidently quotes a doc that references a deprecated API.
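A threshold policy like the one above is easy to reason about as a simple predicate. This is an illustrative sketch only: the field names (`trust_score`, `last_reviewed`, `external_change_flagged`) are assumptions for the example, not Rasepi's actual schema.

```python
from datetime import datetime, timedelta, timezone

# Example policy values from the text above.
TRUST_THRESHOLD = 0.8
MAX_REVIEW_AGE = timedelta(days=30)

def citable(doc: dict, now: datetime) -> bool:
    """Apply the example policy: trust >= 0.8, reviewed within
    30 days, and no open external-change flag.
    Field names are hypothetical, not Rasepi's real schema."""
    return (
        doc["trust_score"] >= TRUST_THRESHOLD
        and now - doc["last_reviewed"] <= MAX_REVIEW_AGE
        and not doc["external_change_flagged"]
    )

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
docs = [
    {"id": "install-guide", "trust_score": 0.92,
     "last_reviewed": now - timedelta(days=10),
     "external_change_flagged": False},
    {"id": "payments-api", "trust_score": 0.95,
     "last_reviewed": now - timedelta(days=5),
     "external_change_flagged": True},  # deprecated endpoint detected
]
allowed = [d["id"] for d in docs if citable(d, now)]
```

Note that a high trust score alone is not enough: the payments doc scores 0.95 but is still excluded because an external change flagged it.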
Trust metadata is available via API, MCP server, and webhooks. Plug it into your RAG pipeline, your enterprise search ranker, or your internal AI governance layer. Every answer your AI gives can carry a trust signal.
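One way a RAG pipeline might consume that metadata is to blend retrieval similarity with document trust when ranking chunks. The blending formula and the payload shape below are assumptions for illustration, not Rasepi's actual API response.

```python
# Hypothetical RAG integration: down-weight retrieved chunks using
# the trust score attached to their source document.
def rerank(chunks: list[dict], trust_by_doc: dict[str, float],
           weight: float = 0.5) -> list[dict]:
    """Rank chunks by (1 - weight) * similarity + weight * trust.
    Documents with no trust metadata default to trust 0.0."""
    def blended(chunk: dict) -> float:
        trust = trust_by_doc.get(chunk["doc_id"], 0.0)
        return (1 - weight) * chunk["similarity"] + weight * trust
    return sorted(chunks, key=blended, reverse=True)

chunks = [
    {"doc_id": "old-runbook", "similarity": 0.91},
    {"doc_id": "new-runbook", "similarity": 0.84},
]
# Trust scores as they might arrive from a trust-metadata endpoint.
trust = {"old-runbook": 0.2, "new-runbook": 0.9}
ranked = rerank(chunks, trust)
```

With trust factored in, the slightly less similar but far more trustworthy runbook wins the top slot, so the stale one never reaches the answer.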
Rasepi is API and AI first. The web interface is one client. The REST API, MCP server, and webhook system are equally capable. Your internal tools, CI pipelines, and AI assistants all consume the same trust data through the same endpoints. Explore the developer docs →
This is not about adding AI to docs. It is about making docs safe for AI. Without trust metadata, AI tools amplify outdated information with full confidence. With it, they know what to cite, what to warn about, and what to skip.
Different content has different shelf lives. Rasepi lets you define expiry policies by document type and enforces them automatically.
Fast-changing operational docs get short expiry windows. When tooling changes, the runbook gets flagged immediately. When it doesn't, it still gets reviewed monthly.
Stable policy docs don't need monthly reviews. But they do need to be checked when regulations change. Rasepi handles both: scheduled expiry and external triggers.
Competitive pricing changes fast. Weekly expiry ensures sales teams always have current numbers. Product config changes trigger immediate flags regardless of the schedule.
For compliance SOPs and security procedures, require the reviewer to formally attest that they've checked the content. The attestation is logged, timestamped, and tied to the trust score.
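The four document types above could be captured in a per-type policy table like the following sketch. The shape, field names, and trigger names are illustrative assumptions, not Rasepi's configuration format.

```python
from datetime import timedelta

# Hypothetical expiry policies per document type, mirroring the
# examples in the text: short windows plus external triggers.
EXPIRY_POLICIES = {
    "runbook": {
        "expires_after": timedelta(days=30),
        "external_triggers": ["tooling_change", "incident_postmortem"],
    },
    "policy": {
        "expires_after": timedelta(days=365),
        "external_triggers": ["regulation_change"],
    },
    "pricing": {
        "expires_after": timedelta(days=7),
        "external_triggers": ["product_config_change"],
    },
    "compliance_sop": {
        "expires_after": timedelta(days=90),
        "external_triggers": ["regulation_change"],
        "require_attestation": True,  # reviewer must formally attest
    },
}
```

The key idea is that the scheduled window and the external triggers are independent: a pricing doc expires weekly no matter what, but a product-config change flags it immediately regardless of where it is in that week.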
Guru and Tettra let you set a verification interval. Every 90 days, someone clicks “still accurate.” If a breaking change shipped yesterday, nobody knows until the next review cycle. The badge says “verified.” The content is already wrong.
Confluence and Notion don't track freshness. A page updated 3 years ago looks exactly like one updated today. Their AI features index everything with equal confidence. Atlassian Intelligence can summarise a page, but it cannot tell you whether the page is still true.
Rasepi gives every document a trust score backed by evidence. Your team sees why a doc was downgraded. Your AI tools know what's safe to cite. Nobody follows a runbook that references a tool you replaced six months ago.
Rasepi is in private beta. We're inviting teams in waves.