Reference

Concrete projects connected to this lab. Each entry explains what the project is, why it exists, and what it taught me.

These are not showcases. They are reference points — things I built that informed how I think about AI-assisted development, system design, and the boundary between automation and human judgment.


Agenzio

What it is: A starter and generator for production-grade platforms. It scaffolds a baseline project with infrastructure, backend, frontend, and deployment pipeline, following explicit conventions and module boundaries.

Why it exists: I needed a repeatable way to go from zero to a working, deployable system without accumulating structural debt in the first hours of a project. Every project I started from scratch repeated the same decisions: authentication setup, CI/CD wiring, folder structure, naming. Agenzio encodes those decisions so they only need to be made once.
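
To make "encoded decisions" concrete, here is a minimal sketch, assuming a TypeScript codebase, of what a generator-owned decisions file might look like. Every name in it (ProjectConventions, the folder layout, the pipeline steps) is illustrative, not Agenzio's actual interface.

    // Hypothetical "decisions made once" module. The shape and the
    // values are invented for illustration; this is not Agenzio's
    // real configuration.
    interface ProjectConventions {
      folders: Record<string, string>;                  // module name -> path
      naming: { files: "kebab-case"; dbTables: "snake_case" };
      auth: { provider: string; sessionStrategy: "jwt" | "cookie" };
      ci: string[];                                     // ordered pipeline steps
    }

    export const conventions: ProjectConventions = {
      folders: {
        api: "src/server",   // backend entry points
        ui: "src/client",    // frontend components
        infra: "infra",      // deployment and infrastructure code
      },
      naming: { files: "kebab-case", dbTables: "snake_case" },
      auth: { provider: "oidc", sessionStrategy: "cookie" },
      ci: ["lint", "typecheck", "test", "build", "deploy"],
    };

The point of the shape: everything generated downstream reads from this one module, so later additions, human- or AI-written, stay aligned with the original decisions.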

What it taught me: The value of a generator is not the code it produces but the decisions it embodies. When those decisions are coherent, everything downstream — including AI-assisted extension — stays aligned. When they are vague, the generated code drifts within days.


Medical Agenda

What it is: A healthcare scheduling platform used in a real clinical environment. It manages appointments, patient data, and operational workflows for a medical practice.

Why it exists: A specific practice needed a scheduling system that fit their workflow, not a generic SaaS product that required them to adapt. I built it to solve a concrete problem for a concrete user, and then used it as a testing ground for AI-assisted features in a regulated domain.

What it taught me: Regulated domains expose the limits of AI-assisted development clearly. The AI can generate CRUD operations and UI components efficiently, but anything involving data privacy, clinical logic, or compliance requires careful human oversight. The most important lesson: AI acceleration is real, but it does not reduce the need for domain knowledge. It amplifies the consequences of not having it.
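
One way to make that oversight boundary mechanical rather than aspirational is to route any change touching privacy- or compliance-sensitive code to mandatory human review. The sketch below is illustrative TypeScript; the module paths (src/patients/, src/clinical/) are invented, not Medical Agenda's real layout.

    // Path-based review gate (illustrative). AI-generated changes to
    // generic UI or CRUD code can be fast-tracked; anything touching
    // sensitive modules always waits for a human.
    const SENSITIVE_PREFIXES = ["src/patients/", "src/clinical/", "src/billing/"];

    type ReviewLevel = "fast-track" | "human-review-required";

    function classifyChange(touchedFiles: string[]): ReviewLevel {
      const touchesSensitive = touchedFiles.some((file) =>
        SENSITIVE_PREFIXES.some((prefix) => file.startsWith(prefix)),
      );
      return touchesSensitive ? "human-review-required" : "fast-track";
    }

    // A pure UI change passes; the same change plus one patient-data
    // file does not.
    classifyChange(["src/ui/appointment-list.tsx"]);               // "fast-track"
    classifyChange(["src/ui/form.tsx", "src/patients/record.ts"]); // "human-review-required"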


ALLVI

What it is: An experimental project developed in collaboration with external partners. It explores how structured platforms and AI agents can support domain-specific workflows at scale.

Why it exists: I wanted to test whether the patterns I developed in smaller projects — constrained AI agents, explicit module boundaries, human-in-the-loop decision points — could work in a larger, multi-stakeholder context.
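
A minimal sketch of those three patterns combined, again in illustrative TypeScript: the agent's boundary is explicit data, and anything outside it becomes a human decision point. The names (AgentProposal, AgentContract, executeProposal) are hypothetical, not ALLVI's code.

    // Constrained agent with a human-in-the-loop decision point.
    interface AgentProposal {
      module: string;      // module the agent wants to change
      description: string; // what it intends to do
    }

    interface AgentContract {
      allowedModules: Set<string>;                 // explicit boundary
      approve(p: AgentProposal): Promise<boolean>; // human decision point
    }

    async function executeProposal(
      proposal: AgentProposal,
      contract: AgentContract,
      apply: (p: AgentProposal) => Promise<void>,
    ): Promise<void> {
      if (contract.allowedModules.has(proposal.module)) {
        await apply(proposal);          // inside the boundary: act freely
        return;
      }
      const approved = await contract.approve(proposal); // outside: ask first
      if (approved) await apply(proposal);
    }

The design choice worth noting is that the boundary is enforced by code, not by convention: the agent cannot act outside allowedModules without a recorded human approval.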

What it taught me: Coordination cost grows faster than technical complexity. The patterns held, but the effort required to align multiple people on shared conventions was significantly higher than the effort to implement them. In collaborative projects, the architecture of communication matters as much as the architecture of code.