Rachel Ankerholz · AI Strategy

AI Readiness Assessment

Enterprise & Regulated Organization Edition. Complete this 31-question assessment to gauge your organization’s current AI governance posture. Your scores generate your AI Readiness Report Card.

Governance & Policy

Does your organization have a formal AI governance policy?

Who owns AI risk and accountability in your organization?

Has your leadership team received AI literacy training in the past 12 months?

Does your board of directors receive regular reporting on AI risk, compliance status, and strategic AI initiatives?

Do employees have a documented acceptable use policy for AI tools, and has it been communicated in the past 12 months?

Do you have a policy and detection mechanism for employees using unsanctioned AI tools (Shadow AI)?

Risk & Accountability

Have you audited your AI tools for bias, fairness, or disparate impact?

Do you have a vendor AI risk assessment process before adopting new tools?

Can your organization explain how AI-driven decisions are made to stakeholders?

Can your team produce decision logs for any AI-influenced outcome within 24 hours if a regulator or court requests them?

Are there defined human review and override requirements for high-stakes AI-driven decisions (e.g., hiring, benefits, lending, healthcare)?

Agent Authorization & Control

Do your AI agents have documented spending limits and action boundaries?

Can you revoke an AI agent’s access across all connected systems immediately?

Do you know which critical workflows would be disrupted if your primary cloud provider went offline or was sanctioned?

Have your AI systems been tested for adversarial attacks, prompt injection, or data poisoning vulnerabilities?

Compliance & Data

Do you know which AI tools are processing sensitive or regulated data?

Has your organization addressed AI in your data privacy and consent frameworks?

Have you mapped your AI systems against applicable regulatory frameworks (e.g., EU AI Act risk tiers, NIST AI RMF, ISO 42001)?

Have you assessed AI-specific compliance obligations under the regulations governing your industry (e.g., HIPAA, GLBA, FERPA, SEC disclosure rules)?

Do your AI vendor contracts include provisions for audit rights, data deletion, model change notifications, and subprocessor disclosures?

Have you assessed whether your existing vendor software (ERP, CRM, HR platforms, etc.) contains embedded AI features that process your data?

Do you know what data was used to train the AI models you deploy, and whether that data included sensitive or regulated information?

Accessibility & Inclusion

Is your organization’s public-facing digital presence ADA/WCAG 2.1 compliant?

Do your AI systems include equity and inclusion considerations in their design and testing?

Has your organization conducted an accessibility audit of AI-generated content in the past 12 months?

Do you have a process for ensuring AI tools don’t create new accessibility barriers for employees or customers?

Model Performance & Security

Do you have a process for monitoring AI system performance over time, including accuracy drift and output quality degradation?

Does your cyber liability or professional liability insurance explicitly address AI-related incidents?

Incident Response

Do you have an AI incident response plan for when something goes wrong?

Have you conducted a tabletop exercise simulating an AI incident in the past 12 months?

Executive Readiness

How confident is your C-suite in your AI readiness if regulators, auditors, or a court asked tomorrow?

Want to go deeper?

Rachel works with leadership teams on AI governance strategy, readiness assessments, and building accountability infrastructure.

Get in Touch