Most AI procurement frameworks ask the wrong question. They ask “Is this tool safe?” when the more consequential question is “What decisions does this tool make, and who in this organization is accountable for them?”
The distinction matters because safety is an engineering claim — it can be certified, audited, disclaimed. Accountability is a governance claim. It requires that a human being, in a defined role, with defined authority, can explain and, if necessary, reverse any decision the system produces.
The organizations that navigate AI procurement well aren’t those with the most sophisticated technical review processes. They’re the ones that have mapped their decision architecture — who decides what, and on what basis — before they start evaluating vendors. That’s the work the Public Values Audit Matrix is designed to support.