## Purpose
Shadow AI has become shadow IT with memory, tools, and data retention. Bans rarely work, while unmanaged agents can leak sensitive data or act with unclear authority.
This project is a policy scanner that inventories declared agent capabilities, data classes, tools, and retention behavior, then produces governance findings and safer migration paths.
## What it does
Validates a domain-specific AI governance packet, scores readiness, and returns concrete findings that contributors can improve.
## Why it matters
AI systems are moving from chat into action. This project makes one hard operational risk easier to inspect, test, and govern in public.
## Who should use it
Teams that want to discover and govern unsanctioned AI agent usage without blocking useful work. Builders can start with the CLI, then add adapters, fixtures, schemas, and integrations.
## Quick Start
```shell
npm test
npm start -- sample
```
## Example Packet
```json
{
  "agent": {
    "name": "sales-helper",
    "owner": "growth"
  },
  "data": {
    "classes": [
      "customer_email",
      "contract_terms"
    ],
    "retentionDays": 90
  },
  "tools": [
    { "name": "gmail", "permission": "send" },
    { "name": "crm", "permission": "write" }
  ],
  "approvals": {
    "dpoReviewed": false
  }
}
```
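To show the kind of finding a packet like this can surface, here is a minimal sketch in plain Node.js. The `scanPacket` function, the finding messages, and the 30-day retention baseline are illustrative assumptions, not the project's actual API or policy.

```javascript
// The example packet from above, inlined so the sketch is self-contained.
const packet = {
  agent: { name: "sales-helper", owner: "growth" },
  data: { classes: ["customer_email", "contract_terms"], retentionDays: 90 },
  tools: [
    { name: "gmail", permission: "send" },
    { name: "crm", permission: "write" },
  ],
  approvals: { dpoReviewed: false },
};

// scanPacket is a hypothetical name; it returns a list of governance findings.
function scanPacket(p) {
  const findings = [];
  // Tools with "send" or "write" permissions let the agent act, not just read.
  const writeLike = p.tools.filter((t) => ["send", "write"].includes(t.permission));
  if (writeLike.length > 0 && !p.approvals.dpoReviewed) {
    findings.push("write-capable tools declared without DPO review");
  }
  // Assumed 30-day retention baseline for illustration only.
  if (p.data.retentionDays > 30) {
    findings.push(`retention of ${p.data.retentionDays} days exceeds 30-day baseline`);
  }
  return findings;
}

console.log(scanPacket(packet)); // logs two findings for this packet
```

The example packet trips both checks: it declares write-capable tools without a DPO review, and its 90-day retention exceeds the assumed baseline.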
## Contribution Tracks
### Good first issues
- browser extension telemetry
- CASB integrations
- SaaS discovery adapters
- policy-as-code packs
### Core improvements
- Add JSON Schema validation.
- Add more real-world, non-sensitive fixtures.
- Improve scoring transparency and edge-case tests.
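For the JSON Schema item above, a starting point might look like the draft-07 sketch below. The required fields mirror the example packet; the `permission` enum values are an assumption and would need to match whatever permissions the project actually recognizes.

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["agent", "data", "tools", "approvals"],
  "properties": {
    "agent": {
      "type": "object",
      "required": ["name", "owner"],
      "properties": {
        "name": { "type": "string" },
        "owner": { "type": "string" }
      }
    },
    "data": {
      "type": "object",
      "required": ["classes", "retentionDays"],
      "properties": {
        "classes": { "type": "array", "items": { "type": "string" } },
        "retentionDays": { "type": "integer", "minimum": 0 }
      }
    },
    "tools": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["name", "permission"],
        "properties": {
          "name": { "type": "string" },
          "permission": { "enum": ["read", "send", "write"] }
        }
      }
    },
    "approvals": {
      "type": "object",
      "required": ["dpoReviewed"],
      "properties": {
        "dpoReviewed": { "type": "boolean" }
      }
    }
  }
}
```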
### Integration work
- Build adapters for common AI frameworks.
- Add CI checks and report exports.
- Connect the packet format to operational workflows.
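An adapter for an AI framework would mostly be a mapping from that framework's agent configuration into the packet format. The sketch below is hypothetical: `toPacket`, the input field names (`id`, `team`, `scope`, and so on), and the scope-to-permission mapping are illustrative assumptions, not an existing adapter.

```javascript
// Hypothetical adapter: map a framework-specific agent config into the
// governance packet shape shown in the Example Packet section.
function toPacket(agentConfig) {
  return {
    agent: { name: agentConfig.id, owner: agentConfig.team },
    data: {
      classes: agentConfig.dataClasses ?? [],
      retentionDays: agentConfig.retentionDays ?? 0,
    },
    tools: (agentConfig.tools ?? []).map((t) => ({
      name: t.name,
      permission: t.scope, // framework "scope" mapped onto packet "permission"
    })),
    // Approvals rarely live in framework config; default to unreviewed.
    approvals: { dpoReviewed: Boolean(agentConfig.dpoReviewed) },
  };
}

const packet = toPacket({
  id: "support-bot",
  team: "cx",
  dataClasses: ["customer_email"],
  retentionDays: 14,
  tools: [{ name: "zendesk", scope: "write" }],
});
console.log(packet.agent.name); // "support-bot"
```

Defaulting `dpoReviewed` to `false` when the source config is silent keeps the adapter conservative: an unreviewed agent shows up as a finding rather than silently passing.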