The observation
Italian SMEs already have access to AI tools. Most of them use GitHub Copilot, Claude, or ChatGPT individually, with no shared standards, no governance, and no way to measure what is actually working. Nobody knows what anyone else is using. Nothing becomes institutional knowledge. The problem is not tool availability. It is adoption without coordination.
I ran 30+ structured interviews with Italian SME owners and operations managers across 2025 and 2026. The same pattern came up every time: they do not want "AI". They want to solve 3 specific recurring operational problems, see the impact in under 30 days, and not have to think about the infrastructure behind it.
What I built, and how
I built this hands-on, not as a PM writing specs: I designed the API architecture before writing a line of implementation code, configured the retrieval engine, built and tested the agentic workflows on n8n, integrated the Claude API as the main LLM with governance constraints, and validated the endpoints directly, including error handling and edge-case behavior.
The product lets a small company load its operational documents: contracts, procedures, internal knowledge, recurring workflows. The system indexes them adaptively, builds a conceptual graph connecting entities and concepts across documents, and lets anyone in the company query that knowledge base with the AI grounding every answer strictly on what was retrieved. If the information is not there, the system says so.
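The refusal behavior is the key product guarantee, so it is worth making concrete. This is a minimal sketch of the idea, not the production implementation: the `Chunk` type, the score threshold, and the refusal message are all hypothetical stand-ins, and in the real system the filtered evidence would be passed to the LLM under a grounding-only prompt rather than formatted directly.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """One retrieved passage with its source document and relevance score."""
    doc_id: str
    text: str
    score: float

def answer_grounded(question: str, retrieved: list[Chunk], min_score: float = 0.5) -> str:
    """Answer only when retrieval produced usable evidence; otherwise refuse.

    Hypothetical sketch: the real system hands `evidence` to the LLM with a
    prompt that forbids going beyond it. Here we only show the refusal path
    and the source attribution.
    """
    evidence = [c for c in retrieved if c.score >= min_score]
    if not evidence:
        return "No supporting document found for this question."
    sources = ", ".join(sorted({c.doc_id for c in evidence}))
    return f"Answer based on: {sources}"

# Usage: an empty or low-confidence retrieval triggers an explicit refusal
# instead of a plausible-sounding invention.
print(answer_grounded("What is the notice period?", []))
```

The design choice is that "I don't know" is a first-class response, computed before the LLM ever sees the question.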
The retrieval architecture: AREX
The core engine is AREX (Adaptive Retrieval for Extended Experience), an advanced RAG architecture developed by researchers Samuele Pretini and Giorgia Lentoni (Milan, 2025). I adopted it as the technical foundation and applied it to the Italian SME context.
A standard RAG system has three structural problems: the schema is decided before the content is known, there is no memory of past interactions, and the LLM can still generate answers not anchored to the retrieved evidence. AREX addresses all three.
Metadata is induced dynamically at ingestion time, adapting to each document's actual structure rather than forcing it into a preset schema. Retrieval runs in parallel across four channels:

- keyword search on metadata and text
- sparse lexical search for exact matches
- dense semantic search on vector embeddings
- graph traversal on a conceptual graph connecting entities and concepts across documents

A reasoning agent integrates the results and constrains output strictly to retrieved evidence. An interaction experience graph records every query, response, and correction, so the system improves over time without retraining the LLM.
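The parallel fan-out can be sketched in a few lines. The four channel functions below are toy stubs, and the merge step uses reciprocal rank fusion as an illustrative stand-in for AREX's reasoning agent, which does considerably more than score aggregation; every name here is an assumption, not the actual API.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for the four channels (keyword, sparse lexical, dense
# semantic, graph traversal). Each returns document IDs best-first.
def keyword_search(query): return ["doc_a", "doc_c"]
def sparse_search(query):  return ["doc_a", "doc_b"]
def dense_search(query):   return ["doc_b", "doc_a"]
def graph_search(query):   return ["doc_c", "doc_b"]

def fuse(query: str, k: int = 60) -> list[str]:
    """Run all four channels in parallel, then merge the ranked lists
    with reciprocal rank fusion: each document scores 1/(k + rank + 1)
    per channel, summed across channels."""
    channels = [keyword_search, sparse_search, dense_search, graph_search]
    with ThreadPoolExecutor() as pool:
        ranked_lists = list(pool.map(lambda ch: ch(query), channels))
    scores: dict[str, float] = defaultdict(float)
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Usage: documents surfaced by several channels outrank ones that a
# single channel happened to score highly.
print(fuse("termination clause"))
```

Rank fusion is one common way to combine heterogeneous retrievers without normalizing their incomparable scores; the actual integration step in AREX is agentic rather than purely arithmetic.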
What the interviews taught me about SME adoption
The primary fear is not cost. It is making a wrong business decision because the AI invented something plausible. The second fear is time: another tool that requires onboarding they do not have capacity for. The third is data sovereignty.
Those three fears shaped the product directly. Grounded-only output with source citations handles the first. A no-code document loading interface handles the second. A self-hosted deployment option handles the third.
"The SMEs I interviewed did not want AI. They wanted to stop losing hours every week to a process that had a better answer sitting in a document nobody could find." — From the research, 2025
Results
- Working MVP with real users in 2 months from architecture to production
- 30+ structured interviews conducted before and during build
- 15+ competitive platforms benchmarked including Zapier, Make, n8n, Microsoft Power Automate, Notion AI, ChatGPT Enterprise
- Zero hallucinated responses in the first production period: all outputs anchored to retrieved evidence with source references