AI Readiness & Enablement
AI adoption is not primarily a technology challenge. It is a human capability challenge. Tool access is no longer scarce. Judgement, discipline, structure, and measurable performance are.
The framework combines two complementary components: AI Orientation Survey (mindset and behavioural readiness) and AI Capability Assessment (observable AI-enabled performance).
Readiness
This is not a technical audit. It answers a foundational question: do our people want to use AI, and how are they thinking about it?
Appetite and adaptability: willingness to experiment, iterate, and adapt workflows as AI capability evolves.
Judgement under uncertainty: ability to avoid both blind trust and risk paralysis while navigating privacy, bias, and ethics.
Confidence alignment: whether perceived skill levels match actual limits, dependency patterns, and decision quality.
Survey outputs provide a readiness snapshot by individual and team, highlight judgement risk concentration, and identify where enablement should start.
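A readiness snapshot of this kind can be sketched as a simple aggregation. The sketch below is illustrative only: the dimension names follow the three areas above, but the 1-5 response scale and the 3.5 judgement-risk threshold are assumptions, not part of the survey's published scoring.

```python
# Hypothetical sketch: rolling survey responses up into a readiness
# snapshot by individual and team. Scales and the 3.5 risk threshold
# are illustrative assumptions.
from statistics import mean

DIMENSIONS = ("appetite", "judgement", "confidence_alignment")

def individual_snapshot(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average each dimension's 1-5 survey items into a per-dimension score."""
    return {dim: mean(responses[dim]) for dim in DIMENSIONS}

def team_snapshot(members: dict[str, dict[str, list[int]]]) -> dict:
    """Roll individual snapshots up to team level and flag judgement risk."""
    snapshots = {name: individual_snapshot(r) for name, r in members.items()}
    team_avg = {dim: mean(s[dim] for s in snapshots.values())
                for dim in DIMENSIONS}
    # Judgement risk concentration: members below an assumed threshold.
    at_risk = [n for n, s in snapshots.items() if s["judgement"] < 3.5]
    return {"individuals": snapshots, "team": team_avg, "judgement_risk": at_risk}

team = {
    "alice": {"appetite": [4, 5], "judgement": [4, 4], "confidence_alignment": [3, 4]},
    "bob":   {"appetite": [3, 3], "judgement": [2, 3], "confidence_alignment": [3, 3]},
}
result = team_snapshot(team)
print(result["judgement_risk"])  # bob falls below the illustrative threshold
```

The point of the aggregation is the last line: readiness is reported per individual and per team, and judgement risk is surfaced as a concentration rather than an average, since one low scorer in a high-stakes role matters more than a healthy team mean suggests.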
Performance
The AI Capability Assessment measures whether people can deploy AI effectively, responsibly, and consistently enough to create measurable value.
Adoption engine
Sustained drive to explore, test, and refine AI use through deliberate experimentation.
Drives adoption and long-term capability growth.
Scale engine
Ability to turn AI interaction into structured, repeatable workflows that teams can scale.
Drives scale and consistency.
Protection engine
Disciplined verification of AI outputs for quality, logic, risk, and contextual suitability.
Drives protection and disciplined judgement.
Impact engine
Ability to deploy AI where it creates measurable improvement in performance and decisions.
Drives impact and measurable results.
Scoring and outputs
The model maps where people and teams are now, then defines a practical progression path for improving capability over time.
Maturity levels
Experimental
Trial
Productive
Repeatable
Scalable
Integrated
Leading
Differentiated
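One way to picture how a composite capability score maps onto the eight levels above is a banded lookup. The band boundaries below are illustrative assumptions only; the framework's actual scoring is not specified here.

```python
# Hypothetical sketch: mapping a composite capability score (0-100)
# onto the eight maturity levels. Band boundaries are illustrative
# assumptions, not the framework's published thresholds.
from bisect import bisect_right

LEVELS = ["Experimental", "Trial", "Productive", "Repeatable",
          "Scalable", "Integrated", "Leading", "Differentiated"]
# Upper bounds of the first seven bands; scores above the last
# boundary fall into "Differentiated".
BANDS = [15, 30, 45, 60, 72, 84, 93]

def maturity_level(score: float) -> str:
    """Return the maturity level whose band contains the score."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return LEVELS[bisect_right(BANDS, score)]

print(maturity_level(10))   # Experimental
print(maturity_level(50))   # Repeatable
print(maturity_level(95))   # Differentiated
```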
Individual outputs
Capability profile, strengths, blind spots, and targeted development priorities tied to role expectations.
Team outputs
Capability heatmap, adoption risk flags, and a focused enablement plan to lift execution quality.
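A team heatmap with risk flags can be sketched as per-engine means and spreads. The engine names come from the four engines above; the 0-100 scale, the thresholds, and the flag rules are assumptions for illustration.

```python
# Hypothetical sketch: a team capability heatmap across the four
# engines, with adoption risk flags. Scores, thresholds, and flag
# rules are illustrative assumptions.
from statistics import mean, pstdev

ENGINES = ("adoption", "scale", "protection", "impact")

def team_heatmap(scores: dict[str, dict[str, float]]) -> dict[str, dict[str, float]]:
    """Per-engine mean and spread across team members (0-100 scale assumed)."""
    return {e: {"mean": mean(s[e] for s in scores.values()),
                "spread": pstdev(s[e] for s in scores.values())}
            for e in ENGINES}

def risk_flags(heatmap: dict, weak: float = 50.0, uneven: float = 20.0) -> list[str]:
    """Flag engines that are weak (low mean) or uneven (high spread)."""
    flags = []
    for engine, cell in heatmap.items():
        if cell["mean"] < weak:
            flags.append(f"{engine}: weak")
        if cell["spread"] > uneven:
            flags.append(f"{engine}: uneven")
    return flags

team = {
    "alice": {"adoption": 80, "scale": 60, "protection": 40, "impact": 70},
    "bob":   {"adoption": 75, "scale": 20, "protection": 45, "impact": 65},
}
flags = risk_flags(team_heatmap(team))
print(flags)
```

Splitting "weak" from "uneven" matters for the enablement plan: a uniformly weak engine calls for team-wide training, while an uneven one calls for pairing strong performers with weaker ones and standardising workflows.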
How to apply it
When to use: Before hiring into AI-exposed roles where judgement quality matters.
Decision: Which candidates can deliver AI-enabled performance, not merely demonstrate tool familiarity.
Output: Comparative capability profiles and hiring risk flags.
When to use: Before scaling AI usage across teams or business units.
Decision: Where capability is strong, uneven, or exposed.
Output: Team heatmaps and clustered risk patterns.
When to use: When setting AI-enabled operating standards at leadership level.
Decision: Where to invest first for measurable uplift and lower risk.
Output: Priority-aligned enablement plan and governance rhythm.
Enablement support
We can tailor the model to your teams, risk profile, and business priorities.