Your trust score comes with evidence.

Other platforms say “verified.” Verified by whom? When? Based on what? Rasepi shows you exactly why a document was downgraded and tells you when something in the real world makes your docs unreliable.

Trust score breakdown and evidence trail

  • Getting Started Guide (Engineering · Updated 2 weeks ago) · Score 54
    ⚠ v2.0 released, guide still references v1.x · ✓ Review status: current · ✓ All 4 languages aligned · ✓ All links valid
  • Payments Integration Guide (Engineering · Updated 5 months ago) · Score 31
    ⚠ /api/v1/payments deprecated in latest deploy · ⚠ Overdue by 22d · ⚠ German version outdated
  • Employee Benefits FAQ (HR · Updated 1 week ago) · Score 96
    ✓ No external changes detected · ✓ High readership · ✓ All 5 languages current

The problem with “verified”

Most knowledge platforms let someone click a button that says “this is still accurate.” That badge tells you nothing about what happened since the last check. The real world does not wait for your review cycle.

❌ Scheduled verification

  • Someone clicks "verified" every 90 days
  • No idea if a dependency shipped a breaking change yesterday
  • API deprecated last week? Badge still says "verified"
  • Postmortem filed an hour ago? Runbook looks fine
  • Trust is a badge, not a signal

✔ Evidence-based trust

  • Trust score reacts to real-world changes in real time
  • Every score drop comes with the specific trigger
  • External changes flagged before anyone reads stale content
  • Reviewers see exactly what changed and why
  • Trust is observable, auditable, and machine-readable

How trust scoring works

Rasepi does more than count days since the last review. It watches for real changes and connects them to the docs they affect.

1. Rasepi monitors your sources

Connect your GitHub repos, CI/CD pipelines, monitoring tools, policy systems, and product configuration. Rasepi watches for changes that could impact your documentation: releases, deprecations, config updates, incident postmortems.

2. Changes trigger impact analysis

When a dependency changes (a new version ships, an API is deprecated, a tool is swapped out), Rasepi identifies which documents reference that source and evaluates the impact on each one.

3. Affected docs get flagged with evidence

Not just “this doc is stale.” The specific reason: “v2.0 released, your install guide still references v1.x” or “/api/v1/payments deprecated in latest deployment.” The signal is visible to everyone.

4. Your team reviews with full context

Reviewers see what changed, when it changed, and which sections are affected. Fix the specific parts, confirm the rest, and the trust score recovers immediately.
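The four steps above can be sketched as a small change-to-evidence loop. Everything in this snippet is illustrative: the class names, the `source` identifiers, and the matching logic are assumptions for the sake of the example, not Rasepi's implementation.

```python
# Hypothetical sketch of the change -> impact -> evidence loop described above.
from dataclasses import dataclass, field

@dataclass
class ChangeEvent:
    source: str   # e.g. "github:acme/cli" (identifier format is assumed)
    kind: str     # "release", "deprecation", "config", "postmortem"
    detail: str   # the human-readable trigger, e.g. "v2.0 released"

@dataclass
class Document:
    title: str
    dependencies: set[str]
    evidence: list[str] = field(default_factory=list)

def analyse_impact(event: ChangeEvent, docs: list[Document]) -> list[Document]:
    """Flag every document that references the changed source,
    attaching the specific trigger as evidence."""
    affected = [d for d in docs if event.source in d.dependencies]
    for doc in affected:
        doc.evidence.append(f"{event.kind}: {event.detail}")
    return affected

docs = [
    Document("Getting Started Guide", {"github:acme/cli"}),
    Document("Employee Benefits FAQ", {"hris:policies"}),
]
hit = analyse_impact(
    ChangeEvent("github:acme/cli", "release", "v2.0 released, guide references v1.x"),
    docs,
)
# Only the Getting Started Guide is flagged, with the trigger attached.
```

The point of the sketch is the shape of the data, not the matching: a change event carries its evidence with it, so every flagged document can show the reader exactly why.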

This is what it looks like in practice

These are the kinds of changes that silently break documentation every day, and the signals Rasepi attaches to each one.

🚀 Your team ships a major release
What happened Your team merges a PR that bumps the minimum Node.js version from 18 to 20 and ships v2.0.
What Rasepi does Flags the Getting Started guide and 3 deployment docs: “Dependency requirement changed. Guides still reference Node 18 and v1.x CLI commands.”
Trust score Drops from 91 → 54 with the release linked as the trigger. Reviewer updates the affected sections, score returns to 100.
🔌 An API endpoint is deprecated
What happened Your backend team deprecates /api/v1/payments in favour of /api/v2/payments and ships the change.
What Rasepi does Surfaces 3 integration guides that reference the deprecated endpoint: “Referenced API endpoint deprecated in latest deployment.”
Trust score Each guide drops proportionally. The one that documents the endpoint directly falls from 88 → 34. Others get a warning signal.
🔧 Your team migrates a critical tool
What happened Your infrastructure team migrates monitoring from Datadog to Grafana. The switchover completes on a Tuesday afternoon.
What Rasepi does Flags 12 runbooks and on-call guides that reference Datadog dashboards, alert rules, and escalation paths: “Monitoring tooling changed. Datadog references may be outdated.”
Trust score All affected docs drop immediately. The on-call team sees the warnings before the next incident.
🚨 A post-incident review identifies gaps
What happened A severity-1 incident reveals that the incident response playbook missed a critical escalation step. The postmortem is filed.
What Rasepi does Links the postmortem to the runbook and flags it: “Post-incident review INC-4821 filed. Playbook gaps identified.”
Trust score Drops from 72 → 28. The on-call lead gets notified and updates the playbook before the next rotation.
🔐 A security vulnerability is published
What happened A critical CVE is published for a library your setup guide recommends installing.
What Rasepi does Flags the setup guide: “CVE-2026-31415 published for recommended dependency. Review installation instructions.”
Trust score Drops to 19. The guide is deprioritised in search and AI results until the team updates the recommendation.

What feeds into the trust score

Every trust score is a composite of internal and external signals. Each one is weighted, tracked, and visible in the score breakdown.

🔌 External source changed (release, deprecation, config update, CVE)
🚨 Post-incident review linked to a document
🔗 Links inside the document are broken or redirected
📝 Hasn't been reviewed and its expiry window is approaching
🌐 Source language updated but translation versions weren't
💬 Readers have flagged the content as outdated
👁️ Low readership in the trailing window
Every signal is attached to the score as evidence. When a trust score drops, you see the reason, not just a number. “Score dropped from 91 to 54 because v2.0 was released and this guide still references v1.x.” That is the difference between a metric and an answer.
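As a rough illustration of how a composite score with an evidence trail can fit together, here is a minimal weighted-signal sketch. The signal names and weights are invented for the example; Rasepi's actual weighting is not documented here.

```python
# Illustrative only: a weighted-signal score in the spirit of the breakdown
# above. These weights are assumptions, not Rasepi's real model.
SIGNAL_WEIGHTS = {
    "external_change": 35,    # release, deprecation, config update, CVE
    "postmortem_linked": 25,
    "review_overdue": 15,
    "broken_links": 10,
    "translation_stale": 10,
    "reader_flagged": 5,
}

def trust_score(active_signals: dict[str, str]) -> tuple[int, list[str]]:
    """Start at 100, subtract the weight of each active signal, and keep
    the trigger text so the score always ships with its evidence."""
    score = 100
    evidence = []
    for signal, trigger in active_signals.items():
        score -= SIGNAL_WEIGHTS.get(signal, 0)
        evidence.append(f"{signal}: {trigger}")
    return max(score, 0), evidence

score, why = trust_score({
    "external_change": "v2.0 released, guide still references v1.x",
    "review_overdue": "overdue by 22 days",
})
# The number and the reasons arrive together: a metric and an answer.
```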

Machine-readable trust for your AI tools

If you're deploying AI copilots, RAG pipelines, or enterprise search, you need a way to tell those tools which documents are worth citing. Trust scores give them that signal.

Set a trust threshold: “AI assistants may only cite documents with a trust score of at least 80, reviewed within 30 days, and not flagged by an external change.” Rasepi enforces it. Your AI assistant never confidently quotes a doc that references a deprecated API.

Trust metadata is available via API, MCP server, and webhooks. Plug it into your RAG pipeline, your enterprise search ranker, or your internal AI governance layer. Every answer your AI gives can carry a trust signal.
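As a sketch of what consuming that metadata could look like in a RAG pipeline, the snippet below filters retrieved documents against a trust policy on the 0–100 scale used in the examples above. The field names (`trust_score`, `last_reviewed`, `external_change_flagged`) and thresholds are hypothetical; the real payload shape lives in the developer docs.

```python
# Hypothetical trust gate for a RAG pipeline. Field names and thresholds
# are illustrative assumptions, not Rasepi's documented API.
from datetime import datetime, timedelta, timezone

def citable(doc_meta: dict, min_trust: int = 80, max_review_age_days: int = 30) -> bool:
    """Allow citation only if the doc scores high, was reviewed recently,
    and carries no open external-change flag."""
    reviewed = datetime.fromisoformat(doc_meta["last_reviewed"])
    fresh = datetime.now(timezone.utc) - reviewed <= timedelta(days=max_review_age_days)
    return (
        doc_meta["trust_score"] >= min_trust
        and fresh
        and not doc_meta["external_change_flagged"]
    )

retrieved = [
    {"title": "Payments Integration Guide", "trust_score": 31,
     "last_reviewed": "2026-01-02T00:00:00+00:00", "external_change_flagged": True},
    {"title": "Employee Benefits FAQ", "trust_score": 96,
     "last_reviewed": datetime.now(timezone.utc).isoformat(),
     "external_change_flagged": False},
]
safe_to_cite = [d for d in retrieved if citable(d)]
# Only the Employee Benefits FAQ passes the policy.
```

The same predicate works whether the metadata arrives via the REST API, an MCP tool call, or a webhook payload cached alongside your vector index.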

Rasepi is API and AI first. The web interface is one client. The REST API, MCP server, and webhook system are equally capable. Your internal tools, CI pipelines, and AI assistants all consume the same trust data through the same endpoints. Explore the developer docs →

This is not about adding AI to docs. It is about making docs safe for AI. Without trust metadata, AI tools amplify outdated information with full confidence. With it, they know what to cite, what to warn about, and what to skip.

Policy-driven expiry, not calendar reminders

Different content has different shelf lives. Rasepi lets you define expiry policies by document type and enforces them automatically.

🔧 Runbooks: 30 days

Fast-changing operational docs get short expiry windows. When tooling changes, the runbook gets flagged immediately; when nothing changes, it still gets reviewed monthly.

📜 HR & compliance: 180 days

Stable policy docs don't need monthly reviews. But they do need to be checked when regulations change. Rasepi handles both: scheduled expiry and external triggers.

💰 Pricing playbooks: 7 days

Competitive pricing changes fast. Weekly expiry ensures sales teams always have current numbers. Product config changes trigger immediate flags regardless of the schedule.

✅ Attestation for high-stakes content

For compliance SOPs and security procedures, require the reviewer to formally attest that they've checked the content. The attestation is logged, timestamped, and tied to the trust score.
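The policies above could be expressed as a simple mapping from document type to expiry window. The schema below is purely illustrative (including the assumed 90-day window for compliance SOPs, which the examples above do not specify); it is not Rasepi's configuration format.

```python
# Hypothetical expiry-policy schema mirroring the examples above.
# The compliance SOP window is assumed for illustration.
EXPIRY_POLICIES = {
    "runbook":        {"expiry_days": 30,  "attestation_required": False},
    "hr_policy":      {"expiry_days": 180, "attestation_required": False},
    "pricing":        {"expiry_days": 7,   "attestation_required": False},
    "compliance_sop": {"expiry_days": 90,  "attestation_required": True},
}

def review_overdue(doc_type: str, days_since_review: int) -> bool:
    """A doc is overdue once it outlives its policy's expiry window,
    independent of any external change that may also have fired."""
    return days_since_review > EXPIRY_POLICIES[doc_type]["expiry_days"]

# A runbook last reviewed 45 days ago is overdue; an HR policy is not.
```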

This isn't how anyone else does it

📅 Scheduled verification

Guru and Tettra let you set a verification interval. Every 90 days, someone clicks “still accurate.” If a breaking change shipped yesterday, nobody knows until the next review cycle. The badge says “verified.” The content is already wrong.

📚 No freshness at all

Confluence and Notion don't track freshness. A page updated 3 years ago looks exactly like one updated today. Their AI features index everything with equal confidence. Atlassian Intelligence can summarise a page, but it cannot tell you whether the page is still true.

Rasepi treats knowledge maintenance like infrastructure monitoring. You don't wait for someone to notice your server is down. You detect signals and propagate alerts. Documentation works the same way: detect change, identify impact, surface evidence, resolve.

Stop trusting a “verified” badge

Rasepi gives every document a trust score backed by evidence. Your team sees why a doc was downgraded. Your AI tools know what's safe to cite. Nobody follows a runbook that references a tool you replaced six months ago.

Rasepi is in private beta. We're inviting teams in waves.