Phronisi.
§ Lead Diagnostic · OSFI E-23 Readiness
§ OSFI Guideline E-23 · Model Risk Management · Effective 1 May 2027

Are you ready for E-23?

OSFI's revised Guideline E-23 expands model-risk management to include all analytical methods affecting decisions — AI, machine learning, and qualitative models, not only traditional credit and capital models. Most federally regulated financial institutions are running multiple AI models today without the governance E-23 will require. This is a confidential 5-minute self-assessment across eight dimensions. No data is stored. Nothing is sent anywhere. Score yourself honestly.

8 dimensions · self-assessment
~5 min · to complete
0–24 score · plus maturity tier
3 gaps · plus recommended actions
/ 01

Governance & accountability

E-23 requires named, documented accountability for model risk at the board level and below. Has your institution assigned and documented who owns model risk — including AI/ML models?

Does a named senior executive (CRO, CFO, or equivalent) hold documented accountability for model risk — including AI/ML models — with explicit reporting to the board?
/ 02

Model inventory & identification

E-23 requires a complete inventory of all models in use, with each model classified by risk tier. The expanded scope explicitly captures AI, ML, and qualitative models — not only traditional credit/capital models.

Does your institution maintain a complete, current model inventory that captures AI/ML models alongside traditional models, with each entry tier-rated for risk?
/ 03

Risk-tier methodology

E-23 requires a documented methodology for rating each model's materiality and risk — driving the depth of validation, review frequency, and governance attention each model receives.

Is there a written, board-approved methodology for tiering models by materiality and risk, with the tier of each model driving its validation depth and review cadence?
/ 04

Validation processes

E-23 requires effective challenge through validation — including conceptual soundness review, outcomes analysis, and ongoing performance monitoring. AI/ML models need validation approaches that handle non-deterministic outputs.

Are AI/ML models validated through conceptual review, outcomes analysis, and challenge testing — with methods that explicitly address non-deterministic and learning-based behaviour?
/ 05

Independent review

E-23 emphasises effective challenge from a function independent of the model developers and users — with sufficient AI/ML technical capability to provide meaningful scrutiny, not just procedural compliance.

Is there an independent review function with both organisational distance from model developers and technical capability to challenge AI/ML model design and behaviour?
/ 06

Ongoing monitoring & change

E-23 requires continuous monitoring of model performance, drift detection, and disciplined change management. For AI/ML models, monitoring must capture performance drift, data drift, and behavioural change between versions.

Are AI/ML models continuously monitored for performance drift, data drift, and behavioural change, with formal change-control gating retraining and redeployment?
/ 07

Documentation standards

E-23 requires documentation sufficient for an independent reviewer to understand each model's purpose, design, data, limitations, and use. For AI/ML, this includes data lineage, training methodology, and limitation statements.

Is every AI/ML model documented to a standard that allows an independent reviewer to understand purpose, design, data, limitations, and operational use — without consulting the development team?
/ 08

AI/ML-specific provisions

E-23's revisions are particularly attentive to AI/ML risk: explainability, bias and fairness assessment, generative AI controls, and the boundary between human judgment and model output. This is the dimension institutions most often underestimate.

Has your institution formally addressed AI-specific risks — explainability requirements, bias and fairness testing, generative AI controls, and human-in-the-loop boundaries for each material AI/ML model?
§ Your E-23 readiness score
0/24

Your top three gaps to close
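The scoring implied above can be sketched in a few lines: eight dimensions, each scored 0–3, a 0–24 total mapped to a maturity tier, and the three lowest-scoring dimensions surfaced as the top gaps. The tier names and thresholds below are illustrative assumptions, not terms from Guideline E-23 or the actual diagnostic.

```python
# Sketch of the self-assessment scoring described on this page: eight
# dimensions scored 0-3 each, a 0-24 total mapped to a maturity tier,
# and the three lowest-scoring dimensions reported as top gaps.
# Tier names and thresholds are illustrative assumptions only.

DIMENSIONS = [
    "Governance & accountability",
    "Model inventory & identification",
    "Risk-tier methodology",
    "Validation processes",
    "Independent review",
    "Ongoing monitoring & change",
    "Documentation standards",
    "AI/ML-specific provisions",
]

# (minimum total score, tier name) -- hypothetical tier bands
TIERS = [(0, "Ad hoc"), (7, "Emerging"), (13, "Established"), (19, "E-23 ready")]


def assess(scores: dict[str, int]) -> tuple[int, str, list[str]]:
    """Return (total, maturity tier, top three gaps) for 0-3 scores."""
    if set(scores) != set(DIMENSIONS) or any(
        not 0 <= s <= 3 for s in scores.values()
    ):
        raise ValueError("expected a 0-3 score for each of the eight dimensions")
    total = sum(scores.values())
    # Highest tier whose floor the total clears.
    tier = next(name for floor, name in reversed(TIERS) if total >= floor)
    # Lowest-scoring dimensions first; stable sort preserves page order on ties.
    gaps = sorted(DIMENSIONS, key=lambda d: scores[d])[:3]
    return total, tier, gaps
```

For example, an institution scoring 3 on governance and 0 elsewhere would total 3, land in the lowest tier, and see its governance dimension excluded from the gap list.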

Want a structured plan to close the gaps?

Phronisi offers a 90-minute E-23 readiness review — complimentary, no obligation. We will walk through your gap profile, prioritise remediation, and outline what a focused readiness sprint would look like for your institution. Booking Q3–Q4 2026.

Book a session · arthur@phronisi.app · 90 min · no obligation