Open-source AI infrastructure

Open Eval Courtroom

Citation and claim review packets for legal and policy AI outputs.

JavaScript · MIT licensed · Offline by default · Community extensible

Purpose

Legal and policy AI tools can sound authoritative while hallucinating citations or applying stale law.

Open Eval Courtroom is a review-packet validator that records claims, citations, jurisdiction, date checked, reviewer role, and unresolved authority gaps.

What it does

Validates a domain-specific AI governance packet, scores readiness, and returns concrete findings that contributors can improve.
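To make the validate-score-report loop concrete, here is a minimal sketch in plain Node.js. The field names mirror the Example Packet below, but the specific checks, the readiness formula, and the findings wording are illustrative assumptions, not the project's actual scoring rules.

```javascript
// Sketch of packet validation: run a few structural checks, collect
// findings for anything missing, and derive a readiness score.
function validatePacket(packet) {
  const findings = [];
  if (!packet.output || !packet.output.jurisdiction) {
    findings.push("output.jurisdiction is missing");
  }
  for (const [i, claim] of (packet.claims || []).entries()) {
    if (!claim.citation) findings.push(`claims[${i}] has no citation`);
  }
  if (!packet.review || !packet.review.reviewer) {
    findings.push("review.reviewer is empty");
  }
  if (!packet.review || !packet.review.checkedAt) {
    findings.push("review.checkedAt is missing");
  }
  // Readiness: fraction of checks passed (illustrative weighting).
  const totalChecks = 4;
  const passed = totalChecks - Math.min(findings.length, totalChecks);
  const score = Math.round((passed / totalChecks) * 100) / 100;
  return { score, findings };
}

const result = validatePacket({
  output: { topic: "tenant repair rights", jurisdiction: "CA" },
  claims: [
    { text: "landlord must repair habitability issues", citation: "Cal. Civ. Code 1941" },
  ],
  review: { checkedAt: "2026-04-30", reviewer: "" },
});
console.log(result.score);    // 0.75 — one check failed
console.log(result.findings); // [ 'review.reviewer is empty' ]
```

Because the sample packet has an empty reviewer, the sketch returns one finding and a reduced score; each finding is a concrete item a contributor can fix.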

Why it matters

AI systems are moving from chat into action. This project makes one hard operational risk easier to inspect, test, and govern in public.

Who should use it

Builders and reviewers of legal and policy AI tools who need an auditable citation-review step. Start with the CLI, then add adapters, fixtures, schemas, and integrations.

Quick Start

npm install
npm test
npm start -- sample

Example Packet

{
  "output": {
    "topic": "tenant repair rights",
    "jurisdiction": "CA"
  },
  "claims": [
    {
      "text": "landlord must repair habitability issues",
      "citation": "Cal. Civ. Code 1941"
    }
  ],
  "review": {
    "checkedAt": "2026-04-30",
    "reviewer": ""
  }
}

Contribution Tracks

Good first issues

  • citation resolvers
  • jurisdiction packs
  • court rule freshness checks
  • plain-language review

Core improvements

  • Add JSON Schema validation.
  • Add more real-world, non-sensitive fixtures.
  • Improve scoring transparency and edge-case tests.
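For the JSON Schema track, a draft schema describing the Example Packet could look like the sketch below. The field names follow the sample packet; which fields are required and the `date` format for `checkedAt` are assumptions offered as a starting point, not settled project decisions.

```javascript
// Draft-07 JSON Schema for a review packet (illustrative).
const packetSchema = {
  $schema: "http://json-schema.org/draft-07/schema#",
  type: "object",
  required: ["output", "claims", "review"],
  properties: {
    output: {
      type: "object",
      required: ["topic", "jurisdiction"],
      properties: {
        topic: { type: "string" },
        jurisdiction: { type: "string" },
      },
    },
    claims: {
      type: "array",
      items: {
        type: "object",
        required: ["text", "citation"],
        properties: {
          text: { type: "string" },
          citation: { type: "string" },
        },
      },
    },
    review: {
      type: "object",
      required: ["checkedAt", "reviewer"],
      properties: {
        checkedAt: { type: "string", format: "date" },
        reviewer: { type: "string" },
      },
    },
  },
};

module.exports = { packetSchema };
```

A validator library such as Ajv can compile a schema like this; treating the required fields as a discussion point keeps the contribution small and reviewable.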

Integration work

  • Build adapters for common AI frameworks.
  • Add CI checks and report exports.
  • Connect the packet format to operational workflows.
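One shape an adapter could take: wrap any text-generating function so every response is paired with an empty packet awaiting review. The `withReviewPacket` wrapper and its interface are hypothetical; only the packet field layout comes from the Example Packet above.

```javascript
// Hypothetical adapter: pair each model response with a packet skeleton
// whose claims and review fields are left for the review workflow to fill.
function withReviewPacket(generate) {
  return async function (prompt, meta = {}) {
    const text = await generate(prompt);
    return {
      output: { topic: meta.topic || "", jurisdiction: meta.jurisdiction || "" },
      claims: [],                               // filled later by a citation resolver
      review: { checkedAt: "", reviewer: "" },  // unresolved until a human reviews
      raw: text,
    };
  };
}

// Example with a stub model standing in for a real framework call:
const fakeModel = async () => "landlords must keep units habitable";
withReviewPacket(fakeModel)("tenant rights?", {
  topic: "tenant repair rights",
  jurisdiction: "CA",
}).then((packet) => console.log(packet.output.jurisdiction)); // prints "CA"
```

Keeping the adapter this thin means the same packet format can sit behind any framework, and CI checks can then run the validator on every emitted packet.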