
How to Implement AI in Your Company:
A Practical Guide for Delivery Managers

Most AI projects in companies do not fail because of technology. They fail because of delivery. There are no clear KPIs, no defined ownership, and no structured process to take the prototype into production.

McKinsey (2024) reports that 70% of AI projects never reach production. In organizations where decision cycles are long and data maturity is low, that number is likely higher. I have worked on AI projects across energy, aerospace, logistics, and Italian SMEs. I have watched well-funded projects fail for avoidable reasons.

This guide is for Delivery Managers, IT leads, and operations managers who need to get AI into production. Not just onto a slide.

The 5 most common mistakes in company AI implementation

01
Starting with the wrong use case
The first AI project needs to be visible, measurable, and low operational risk. Do not start with a critical or regulated process. Pick something that still works even if the model is wrong 20% of the time.
02
Data that is not ready
60% of AI project time goes to data quality and preparation (Gartner, 2024). If the data is not structured, accessible, and documented before you start, the project will stall during the pilot.
03
A pilot with no exit KPIs
A pilot without defined exit KPIs becomes an open-ended research project. Before starting, decide: "If we reach X, we go to production. If we do not reach it within Y weeks, we stop." No ambiguity.
04
No LLM governance
Models change versions. Prompts that worked in v1 can degrade in v2. Without model versioning, prompt regression testing, and output validation, you discover problems after they are already in production.
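Prompt regression testing can be as simple as a pinned "golden set" of prompts that any new model version must pass before promotion. A sketch, with a stub standing in for the real model call (the golden-set contents and the `model_v2` stub are invented for illustration; in practice `answer_fn` would wrap your provider's SDK):

```python
# Golden set: prompts with required facts the answer must contain.
GOLDEN_SET = [
    {"prompt": "What is the notice period in contract A?",
     "must_contain": ["30 days"]},
    {"prompt": "Who approves purchase orders above 10k EUR?",
     "must_contain": ["CFO"]},
]

def passes_regression(answer_fn, golden_set) -> bool:
    """answer_fn wraps one specific, version-pinned model."""
    for case in golden_set:
        answer = answer_fn(case["prompt"])
        if not all(term in answer for term in case["must_contain"]):
            return False
    return True

# Stub for illustration; replace with a real, version-pinned LLM call.
def model_v2(prompt: str) -> str:
    return {
        "What is the notice period in contract A?": "The notice period is 30 days.",
        "Who approves purchase orders above 10k EUR?": "The CFO approves them.",
    }[prompt]

print(passes_regression(model_v2, GOLDEN_SET))  # True
```

Run this in CI on every model or prompt change: a v2 that silently drops "30 days" fails the gate before users ever see it.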
05
Forgetting change management
AI changes how people work. Without an adoption plan that includes communication, training, and a feedback loop, the tool gets built and not used. I have seen projects with 70% AI adoption and projects with 10%. The difference was change management, not the technology.

The three-phase framework: Assess, Pilot, Scale

This is the framework I use on my projects. It draws from Lean Startup thinking and SAFe value management approaches. It is practical and it works.

Phase 01
Assess — 2 weeks
Identify the right use case and verify data readiness. Map candidate processes, score them on impact and complexity, audit data quality, and define the pilot success KPIs. Output: a scoping document with the selected use case, defined KPIs, and pilot plan.
Phase 02
Pilot — 4 to 8 weeks
The pilot's goal is to prove the use case works with real data under controlled conditions. Build the working prototype, test it with real users (not just the IT team), measure results against defined KPIs, and identify operational and compliance risks. The pilot closes with a validation report and a clear recommendation: go or no-go for scale.
Phase 03
Scale — 3 to 6 months
This phase takes the pilot to production and builds the infrastructure to expand. The work includes production deployment, LLM governance (model versioning, output validation, audit trail), change management and user training, and continuous model performance monitoring. Output: an AI system in production with measured KPIs, documented governance, and a roadmap for expansion to other use cases.
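The three governance pieces named above (model versioning, output validation, audit trail) can live in one thin wrapper around every production model call. A minimal sketch under stated assumptions: `generate` and `validate` are placeholders for your own model call and validation logic, and the in-memory list stands in for persistent, append-only storage:

```python
import hashlib
import json
import time

audit_log = []  # stand-in for persistent, append-only audit storage

def audited_call(model_id: str, prompt: str, generate, validate) -> str:
    """Wrap a model call with output validation and an audit record."""
    output = generate(prompt)
    record = {
        "ts": time.time(),
        "model_id": model_id,                                    # model versioning
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest()[:12],
        "valid": validate(output),                               # output validation
    }
    audit_log.append(json.dumps(record))                         # audit trail
    if not record["valid"]:
        raise ValueError("output failed validation; withhold from user")
    return output

answer = audited_call(
    "model-v1", "Summarize policy P.",
    generate=lambda p: "Policy P requires annual review.",
    validate=lambda o: len(o) > 0,
)
```

Because every call passes through the wrapper, "which model version produced this answer, and did it pass validation?" is always answerable, which is what an auditor or a regulated client will actually ask.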

A real example: AI automation platform for Italian SMEs

In 2025 I built an AI automation platform for Italian SMEs from scratch, as architect and builder, not as a PM writing specs. The use case: let small companies query their own internal knowledge base (contracts, procedures, operational documents) without the AI inventing answers.

The three phases ran like this:

  • Assess (2 weeks): 30+ structured interviews with Italian SME owners to map real operational pain points. Findings changed what got built and what got dropped. Competitive analysis of 15+ automation platforms to identify where existing tools fell short for this market.
  • Pilot (4 weeks): Built and tested the core retrieval pipeline with real documents and real users. KPI: zero hallucinated responses, grounded output on every query. LLM governance layer built in from day one.
  • Scale (ongoing): Working MVP in production in 2 months from architecture start. Real users, real feedback, continuous improvement through an interaction experience graph that learns from validated responses.
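A grounding check like the pilot's can be approximated with a word-overlap gate: refuse any answer whose content is not mostly covered by the retrieved source chunks. This is a deliberately crude sketch, not the platform's actual implementation; production systems typically use stronger checks such as per-claim citation verification:

```python
import re

def is_grounded(answer: str, retrieved_chunks: list[str],
                threshold: float = 0.6) -> bool:
    # Accept the answer only if most of its words appear in the retrieved
    # source text; otherwise refuse rather than risk an invented answer.
    answer_words = set(re.findall(r"\w+", answer.lower()))
    source_words = set(re.findall(r"\w+", " ".join(retrieved_chunks).lower()))
    if not answer_words:
        return False
    overlap = len(answer_words & source_words) / len(answer_words)
    return overlap >= threshold

chunks = ["the warranty period is 24 months from delivery"]
print(is_grounded("The warranty period is 24 months.", chunks))            # True
print(is_grounded("The warranty also covers accidental damage.", chunks))  # False
```

The design choice matters more than the heuristic: a refused answer costs a retry, while a hallucinated answer about a contract costs trust.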

Result: working MVP in 2 months, zero hallucinations in the first production period. See the full case study.

  • 2 months from architecture to production MVP
  • 30+ user interviews before and during build
  • 0 hallucinated responses in the first production period

Recommended tools for AI implementation

  • LLMs: Claude (Anthropic) for complex reasoning and governance-heavy use cases, GPT-4o for Microsoft ecosystem integration.
  • Workflow orchestration: n8n (self-hosted, GDPR-friendly) for agentic workflows, Zapier for lightweight integrations.
  • Delivery: Jira for AI task tracking, Azure DevOps for CI/CD pipelines, GitHub for prompt versioning.
  • Monitoring: Langfuse or Helicone for LLM performance monitoring in production.

"An AI project without a dedicated Delivery Manager becomes a perpetual experiment. The difference between a prototype and a product is the delivery process." — Pedro Pizarro

Frequently asked questions

How do you implement AI in a company?
In three phases: Assess (identify the right use case and verify data readiness), Pilot (4-8 weeks with defined KPIs), Scale (take it to production with LLM governance and change management). The key is setting exit KPIs before the pilot starts, not during it.
What does an AI pilot project cost?
A well-scoped AI pilot (4-8 weeks) typically costs between €15,000 and €50,000, including consulting, development, and API costs. According to Gartner, investing in AI governance from the start reduces remediation costs by 60%. A fractional AI Delivery Manager for the pilot phase can reduce project failure risk by 40-60%.
How long does AI implementation take?
A well-structured pilot takes 4-8 weeks. The scale phase, bringing the pilot to production, typically takes 3-6 months. The most reliable warning signs of a failing project: a pilot that stretches past 3 months without clear KPIs, and no defined delivery ownership.