
By Evoloop / 30 March 2026 / 7 min read / AI Governance

Is Your Firm's AI Actually Ready for Production? An Honest Checklist

Deploying an AI tool and deploying an AI tool that is production-ready are not the same thing. The difference matters more in professional services than in almost any other sector, because the consequences of AI failure here are not merely inconvenient: they are regulatory, legal, and reputational.

This checklist is designed to give an honest picture of where your firm stands.

Data

  • Do you know exactly where the data used to train or fine-tune your AI came from?
  • Has that data been checked for accuracy, completeness, and bias?
  • Has any personally identifiable or confidential client information been properly anonymised before being used?
  • Do you have a process for detecting when the model's training data has become outdated relative to current legislation or practice?
  • Can you trace a problematic output back to a data source?

If any of these cannot be answered clearly, there is data debt in your system.
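The last question in the list, tracing a problematic output back to a data source, is the one that most often needs a concrete mechanism rather than a policy. A minimal sketch of such a mechanism is below, in Python, with illustrative field names chosen for this example: attach a small provenance record to every document the system can draw on, and keep those records paired with each generated answer so a challenged output has a trail to follow.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SourceRecord:
    """Provenance metadata attached to each document the model can draw on."""
    source_id: str           # stable identifier, e.g. a document management system reference
    origin: str              # where the data came from: client files, public guidance, vendor corpus
    ingested_on: date        # when it entered the system (helps spot outdated material)
    anonymised: bool         # client identifiers and PII removed before use
    reviewed_by: str | None = None  # who checked it for accuracy, completeness, and bias

@dataclass
class TracedAnswer:
    """Pairs a model output with the sources it was built from."""
    answer: str
    sources: list[SourceRecord] = field(default_factory=list)

    def citation_trail(self) -> list[str]:
        # The trail a reviewer follows when an output is challenged.
        return [f"{s.source_id} ({s.origin}, ingested {s.ingested_on})" for s in self.sources]
```

The specific fields will differ by firm; the point is that each checklist question above maps to something recorded, not something remembered.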

Model

  • Is your AI model under version control? Do you know which version is running in production?
  • Do you have a documented process for testing a new model version before it replaces the current one?
  • Do you have a rollback procedure if the model starts behaving unexpectedly?
  • Are you monitoring for model drift, meaning gradual changes in output quality or behaviour over time?
  • Has the model been tested against adversarial inputs?

Vendor-managed models are often updated without clients being told what has changed. Without version tracking and a rollback procedure, unexpected changes in behaviour cannot be managed effectively.
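What version tracking and drift monitoring look like in practice varies by stack, but the shape is usually simple. The sketch below is illustrative, with made-up model names and an arbitrary tolerance: pin the exact model version rather than accepting a vendor's "latest" alias, and compare a rolling sample of output-quality scores against a tested baseline so a silent change shows up as a number rather than a client complaint.

```python
from statistics import mean

# Pin exact, tested model versions in configuration rather than a vendor alias
# such as "latest"; the names here are hypothetical.
MODEL_CONFIG = {
    "production": "vendor-model-2025-11-01",  # the version that passed testing
    "candidate": "vendor-model-2026-02-15",   # evaluated in staging before promotion
}

def drift_alert(baseline_scores: list[float],
                recent_scores: list[float],
                tolerance: float = 0.05) -> bool:
    """Flag drift when recent output-quality scores fall below the tested baseline.

    Scores might come from automated evaluation of sampled outputs against a
    reference set, or from periodic reviewer ratings.
    """
    if not baseline_scores or not recent_scores:
        return False  # nothing to compare yet
    return mean(recent_scores) < mean(baseline_scores) - tolerance

# Example: quality on this week's sampled outputs has slipped against the baseline.
if drift_alert([0.92, 0.91, 0.93], [0.81, 0.84, 0.79]):
    print("Model drift detected: hold the current version and investigate before promoting anything.")
```

The rollback procedure then amounts to switching production back to a version that is still recorded, still available, and already tested.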

Prompts and guardrails

  • Is your system prompt documented and stored somewhere accessible?
  • Has it been reviewed and tested by someone other than the person who wrote it?
  • Is there input validation in place to check what users are submitting to the system?
  • Has the system been tested for prompt injection vulnerabilities?
  • Are outputs checked before they reach the user, either through an AI gateway or content filtering?
  • Is there a documented list of what the system should refuse to do, and has that been verified through testing?

An undocumented system prompt is a system operating on untested assumptions. Prompt injection is a real and documented attack vector, not a theoretical one.
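Input validation and output checking do not need to be elaborate to be worth having. The sketch below, using simple pattern checks and an illustrative refusal list, shows the shape of a guardrail layer: screen what users submit, and screen what the model returns, before either crosses a boundary. Production deployments typically sit this behind an AI gateway or a dedicated moderation service rather than hand-rolled patterns, but the structure is the same.

```python
import re

# Phrases commonly associated with prompt-injection attempts; illustrative, not exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
    r"disregard your guidelines",
]

# Patterns from the firm's documented refusal list that must never leave the system;
# this example pattern stands in for an account-style identifier.
BLOCKED_OUTPUT_PATTERNS = [
    r"\b\d{2}[A-Z]{4}\d{14}\b",
]

def validate_input(user_text: str) -> bool:
    """Return False if the submission looks like a prompt-injection attempt."""
    return not any(re.search(p, user_text, re.IGNORECASE) for p in INJECTION_PATTERNS)

def check_output(model_text: str) -> bool:
    """Return False if the draft output contains material the firm has agreed to block."""
    return not any(re.search(p, model_text) for p in BLOCKED_OUTPUT_PATTERNS)

def respond(user_text: str, generate) -> str:
    if not validate_input(user_text):
        return "Request declined: the input failed validation."
    draft = generate(user_text)  # call to the underlying model
    if not check_output(draft):
        return "Response withheld: the output failed the firm's content checks."
    return draft
```

Whatever form the checks take, the refusal list and the patterns behind them should be documented and exercised in testing, which is exactly what the checklist asks.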

Governance

  • Is there a named owner for this AI system with clear accountability for its behaviour?
  • Is there a written policy covering acceptable use, data handling, and escalation procedures?
  • Has the system been red-teamed, meaning deliberately tested to identify unexpected or harmful behaviour?
  • Is there a process for staff to report concerns or anomalies?
  • Has the system been tested under realistic usage volumes?
  • Is there a documented plan for taking the system offline if required?

Governance is the category most often absent in rushed deployments. Without it, problems surface slowly and are remediated expensively.
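Most governance items are organisational rather than technical, but the last one, a documented plan for taking the system offline, has a small technical component worth illustrating. A minimal sketch, assuming a hypothetical feature flag controlled by the named system owner: the AI path can be switched off centrally and the workflow falls back to the existing manual process rather than failing outright.

```python
import os

def ai_enabled() -> bool:
    """Central kill switch: flipped by the named system owner, not by individual users."""
    # Shown here as an environment variable; in practice a feature-flag service or config store.
    return os.environ.get("AI_ASSISTANT_ENABLED", "true").lower() == "true"

def draft_advice(matter: str, generate, manual_queue: list) -> str:
    if not ai_enabled():
        # Documented fallback: route the work to the existing manual process.
        manual_queue.append(matter)
        return "AI assistance is currently offline; this matter has been routed for manual drafting."
    return generate(matter)
```

The flag itself is trivial; what governance adds is the named owner, the written criteria for using it, and the tested fallback on the other side of it.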

What to do with the results

Gaps across multiple categories are a common position for firms that deployed AI quickly in the last two years. The question is not whether the gaps exist but in what order to close them.

Evoloop's AI Readiness and Workflow Audit works through this assessment systematically, identifies the highest-risk gaps in your specific setup, and produces a prioritised remediation plan.

Ready to explore AI for your business?

Three ways to get started:

  • Book a Workflow Review - 30-minute assessment of where AI fits your practice
  • Apply for the Founding Client Programme - reduced-price pilot for 2 firms
  • See the AI Readiness Audit - structured discovery and roadmap