AMI v1.0 Assessment Kit
Everything you need to evaluate an AI agent system using the Agent Maturity Index. Includes the full rubric, JSON schema, assessment template, and an LLM-ready self-assessment prompt.
Download Kit (.zip)

Kit Contents
| File | Description |
|---|---|
| README.md | Overview and usage guide |
| ami-v1-rubric.md | Full scoring rubric with per-dimension criteria |
| ami-v1-schema.json | JSON Schema for assessment validation |
| ami-v1-profiles.json | Compliance profile definitions |
| ami-assessment-template.json | Blank assessment template |
| ami-self-assessment-llm-prompt.txt | LLM prompt for automated self-assessment |
| submission-guidelines.md | How to submit for official review |
Self-Assessment vs Verified Review
| | Self-Reported | Published (Official) |
|---|---|---|
| Run by | You or your team | Autonomy Index editorial board |
| Review state | draft | published |
| Reviewer signatures | None | SHA-256 signed |
| Listed on index | No | Yes |
| Profile compliance | Self-checked | Must pass prod-general-v1 |
Request an Official Review
To get your system listed on the AMI index:
- Complete a self-assessment using the kit above
- Ensure all evidence is real, cited, and publicly verifiable
- Validate that your assessment passes `community-basic-v1` at minimum
- Submit the assessment JSON for editorial review
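Before submitting, the assessment JSON should validate against ami-v1-schema.json from the kit (in practice via a full JSON Schema validator such as the `jsonschema` package). As a lighter local sanity check, a sketch like the following can flag missing top-level fields; the field names here are illustrative assumptions, not the kit's actual schema:

```python
import json

# Hypothetical required fields; the authoritative list lives in ami-v1-schema.json.
SCHEMA_REQUIRED = ["systemName", "dimensions", "evidence"]

def check_required(assessment: dict, required=SCHEMA_REQUIRED) -> list:
    """Return the required top-level fields missing from the assessment."""
    return [field for field in required if field not in assessment]

draft = json.loads('{"systemName": "demo-agent", "dimensions": {}}')
print(check_required(draft))  # fields still to fill in before submission
```

A passing `check_required` result does not replace profile validation; it only catches obvious gaps before the full `community-basic-v1` check runs.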
The editorial board verifies evidence, adjusts scores where needed, and publishes the assessment with reviewer signatures and an integrity hash.
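The published integrity hash can be reproduced locally to confirm an assessment has not been altered. The board's exact canonicalization rules are not specified here; this sketch assumes SHA-256 over compact, sorted-key JSON:

```python
import hashlib
import json

def integrity_hash(assessment: dict) -> str:
    """SHA-256 over a canonical JSON serialization (sorted keys, no extra whitespace)."""
    canonical = json.dumps(assessment, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(integrity_hash({"systemName": "demo-agent", "score": 3.2}))
```

Sorting keys makes the hash independent of field order in the source file, so two semantically identical assessments hash the same.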
Compliance Profiles
Profile definitions, including `community-basic-v1` and `prod-general-v1`, ship in ami-v1-profiles.json and are also available from the API below.
Machine-Readable API
Access AMI data programmatically:
| Endpoint | Returns |
|---|---|
| `GET /api/ami/rubric` | Full rubric with dimensions, weights, and scoring criteria |
| `GET /api/ami/schema` | JSON Schema for assessment validation |
| `GET /api/ami/profiles` | Compliance profile definitions |
| `GET /api/ami/validate?assessmentId=...&profile=...` | PASS/FAIL evaluation |
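The validate endpoint above can be called from any HTTP client. A minimal sketch using only the Python standard library; the base URL is a placeholder and the JSON response shape is an assumption:

```python
import json
from urllib import parse, request

# Assumed host; substitute the real Autonomy Index domain.
BASE = "https://example.org"

def validate_url(base: str, assessment_id: str, profile: str) -> str:
    """Build the query URL for the PASS/FAIL validation endpoint."""
    query = parse.urlencode({"assessmentId": assessment_id, "profile": profile})
    return f"{base}/api/ami/validate?{query}"

def ami_validate(assessment_id: str, profile: str) -> dict:
    """Fetch and decode the validation result (assumed to be a JSON body)."""
    with request.urlopen(validate_url(BASE, assessment_id, profile)) as resp:
        return json.load(resp)
```

Using `urlencode` keeps assessment IDs and profile names safe even if they contain characters that need escaping.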